mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-20 11:33:24 +02:00
Commit Graph

467 Commits

Author SHA1 Message Date
Chris Lattner
19eb0dad26 Reimplement rip-relative addressing in the X86-64 backend. The new
implementation primarily differs from the former in that the asmprinter
doesn't make a zillion decisions about whether or not something will be
RIP relative or not.  Instead, those decisions are made by isel lowering
and propagated through to the asm printer.  To achieve this, we:

1. Represent RIP relative addresses by setting the base of the X86 addr
   mode to X86::RIP.
2. When ISel Lowering decides that it is safe to use RIP, it lowers to
   X86ISD::WrapperRIP.  When it is unsafe to use RIP, it lowers to
   X86ISD::Wrapper as before.
3. This removes isRIPRel from X86ISelAddressMode, representing it with
   a basereg of RIP instead.
4. The addressing mode matching logic in isel is greatly simplified.
5. The asmprinter is greatly simplified, notably the "NotRIPRel" predicate
   passed through various printoperand routines is gone now.
6. The various symbol printing routines in asmprinter now no longer infer
   when to emit (%rip), they just print the symbol.

I think this is a big improvement over the previous situation.  It does have
two small caveats though: 1. I implemented a horrible "no-rip" modifier for
the inline asm "P" constraint modifier.  This is a short term hack, there is
a much better, but more involved, solution.  2. I had to xfail an 
-aggressive-remat testcase because it isn't handling the use of RIP in the
constant-pool reading instruction.  This specific test is easy to fix without
-aggressive-remat, which I intend to do next.

llvm-svn: 74372
2009-06-27 04:16:01 +00:00
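
A minimal, self-contained C++ sketch of the decision described above (illustrative only — the CodeModel enum and chooseWrapper function are hypothetical stand-ins, not the LLVM API): isel lowering picks the RIP wrapper only when the symbol is known to be reachable with a 32-bit PC-relative displacement, and from then on the address simply carries RIP as its base register, so the asm printer never has to infer (%rip) on its own.

// Illustrative sketch of the wrapper choice; not actual LLVM code.
#include <iostream>

enum class Wrapper { Plain, RIP };             // X86ISD::Wrapper vs. X86ISD::WrapperRIP
enum class CodeModel { Small, Kernel, Large };

// Hypothetical stand-in for the isel-lowering decision: RIP-relative
// addressing is used only when the code model guarantees the symbol is
// reachable with a 32-bit PC-relative displacement.
Wrapper chooseWrapper(bool is64Bit, CodeModel cm, bool symbolIsLocal) {
  if (is64Bit && (cm == CodeModel::Small || cm == CodeModel::Kernel) && symbolIsLocal)
    return Wrapper::RIP;                       // addr-mode base register becomes RIP
  return Wrapper::Plain;                       // absolute or GOT-based addressing
}

int main() {
  std::cout << (chooseWrapper(true, CodeModel::Small, true) == Wrapper::RIP
                    ? "leaq sym(%rip), %rax\n"
                    : "movl $sym, %eax\n");
}
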
Chris Lattner
ae824fc834 make sure to propagate operand flags in SelectTLSADDRAddr properly.
llvm-svn: 74326
2009-06-26 21:18:37 +00:00
Chris Lattner
497d0ec530 fix a pasto.
llvm-svn: 74275
2009-06-26 05:56:49 +00:00
Chris Lattner
93af5249a1 propagate target operand flags through addressing mode selection.
llvm-svn: 74272
2009-06-26 05:51:45 +00:00
Chris Lattner
580eecebbd change TLS_ADDR lowering to lower to a real mem operand, instead of matching as
a global that gets printed with the :mem modifier.  All operands to LEAs
should be handled with the lea32mem operand kind, and this allows the TLS stuff
to do this.  There are several better ways to do this, but I went for the minimal
change since I can't really test this (beyond make check).

This also makes the use of EBX explicit in the operand list in the 32-bit case,
instead of implicit in the instruction.

llvm-svn: 73834
2009-06-20 20:38:48 +00:00
Dan Gohman
691dd710e9 Remove the redundant TM member from X86DAGToDAGISel; replace it
with an accessor method which simply casts the parent class
SelectionDAGISel's TM to the target-specific type.

llvm-svn: 72801
2009-06-03 20:20:00 +00:00
Dan Gohman
0edabc8a6f Convert a subtract into a negate and an add when it helps x86
address folding.

llvm-svn: 71446
2009-05-11 18:02:53 +00:00
Anton Korobeynikov
b3dc881070 Factor out cycle-finder code and make it generic.
llvm-svn: 71241
2009-05-08 18:51:58 +00:00
Bill Wendling
40a162f75f Instead of passing in an unsigned value for the optimization level, use an enum,
which better identifies what the optimization is doing. And is more flexible for
future uses.

llvm-svn: 70440
2009-04-29 23:29:43 +00:00
Bill Wendling
7546bed590 Second attempt:
Massive check-in. This changes the "-fast" flag to "-O#" in llc. If you want to
use the old behavior, the flag is -O0. This change allows for finer-grained
control over which optimizations are run at different -O levels.

Most of this work was pretty mechanical. The majority of the fixes came from
verifying that a "fast" variable wasn't used anymore. The JIT still uses a
"Fast" flag. I'll change the JIT with a follow-up patch.

llvm-svn: 70343
2009-04-29 00:15:41 +00:00
Bill Wendling
ef47ace92f r70270 isn't ready yet. Back this out. Sorry for the noise.
llvm-svn: 70275
2009-04-28 01:04:53 +00:00
Bill Wendling
2799e916c3 Massive check-in. This changes the "-fast" flag to "-O#" in llc. If you want to
use the old behavior, the flag is -O0. This change allows for finer-grained
control over which optimizations are run at different -O levels.

Most of this work was pretty mechanical. The majority of the fixes came from
verifying that a "fast" variable wasn't used anymore. The JIT still uses a
"Fast" flag. I'm not 100% sure if it's necessary to change it there...

llvm-svn: 70270
2009-04-28 00:21:31 +00:00
Rafael Espindola
a07d1c3103 fix PR3995. A scale must be 1, 2, 4 or 8.
llvm-svn: 69284
2009-04-16 12:34:53 +00:00
Dan Gohman
365c457893 For the h-register addressing-mode trick, use the correct value for
any non-address uses of the address value. This fixes 186.crafty.

llvm-svn: 69094
2009-04-14 22:45:05 +00:00
Dan Gohman
be7227005f Implement x86 h-register extract support.
- Add patterns for h-register extract, which avoids a shift and mask,
   and in some cases a temporary register.
 - Add address-mode matching for turning (X>>(8-n))&(255<<n), where
   n is a valid address-mode scale value, into an h-register extract
   and a scaled-offset address.
 - Replace X86's MOV32to32_ and related instructions with the new
   target-independent COPY_TO_SUBREG instruction.

On x86-64 there are complicated constraints on h registers, and
CodeGen doesn't currently provide a high-level way to express all of them,
so they are handled with a bunch of special code. This code currently only
supports extracts where the result is used by a zero-extend or a store,
though these are fairly common.

These transformations are not always beneficial; since there are only
4 h registers, they sometimes require extra move instructions, and
this sometimes increases register pressure because it can force out
values that would otherwise be in one of those registers. However,
this appears to be relatively uncommon.

llvm-svn: 68962
2009-04-13 16:09:41 +00:00
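
A small standalone check (illustrative only, not LLVM code) of the algebraic identity behind the new address-mode match: for a scale amount n in {1,2,3}, (X>>(8-n))&(255<<n) is exactly the high byte of the low 16 bits (AH/BH/CH/DH) multiplied by 2, 4 or 8, so it can be matched as an h-register extract feeding a scaled-offset address.

// Verifies (X >> (8-n)) & (255 << n) == ((X >> 8) & 255) << n for n = 1..3.
#include <cassert>
#include <cstdint>
#include <cstdio>

int main() {
  for (unsigned n = 1; n <= 3; ++n)
    for (uint32_t X = 0; X < 0x20000; X += 0x123) {
      uint32_t maskedShift = (X >> (8 - n)) & (255u << n);
      uint32_t hExtract    = ((X >> 8) & 255u) << n;   // "movzbl %ah" + scale 2/4/8
      assert(maskedShift == hExtract);
    }
  std::puts("identity holds");
}
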
Dan Gohman
f117bbdbcd Remove x86's special-case handling for ISD::TRUNCATE and
ISD::SIGN_EXTEND_INREG. Tablegen-generated code can handle
these cases, and the scheduling issues observed earlier
appear to be resolved now.

llvm-svn: 68959
2009-04-13 15:29:31 +00:00
Dan Gohman
e1db797df3 Use X86::SUBREG_8BIT instead of hard-coding the equivalent constant.
llvm-svn: 68951
2009-04-13 15:14:03 +00:00
Rafael Espindola
72347bffce X86-64 TLS support for local exec and initial exec.
llvm-svn: 68947
2009-04-13 13:02:49 +00:00
Rafael Espindola
ad8137187c In X86DAGToDAGISel::MatchWrapper, if the base or index is already set, bail
out only when the symbolic address is RIP-relative.

llvm-svn: 68924
2009-04-12 23:00:38 +00:00
Rafael Espindola
2b0a01bda9 refactor some code into X86DAGToDAGISel::MatchWrapper
llvm-svn: 68915
2009-04-12 21:55:03 +00:00
Rafael Espindola
88986ef511 Don't fold a load if the other operand is a TLS address.
With this we generate

movl    %gs:0, %eax
leal    i@NTPOFF(%eax), %eax

instead of

movl    $i@NTPOFF, %eax
addl    %gs:0, %eax

llvm-svn: 68778
2009-04-10 10:09:34 +00:00
Rafael Espindola
7eb72dc5f2 Re-apply 68552.
Tested by bootstrapping llvm-gcc and using that to build llvm.

llvm-svn: 68645
2009-04-08 21:14:34 +00:00
Bill Wendling
6e702cf68c Temporarily revert r68552. This was causing a failure in the self-hosting LLVM
builds.

--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td

llvm-svn: 68560
2009-04-07 22:35:25 +00:00
Rafael Espindola
0324937229 Reduce code duplication on the TLS implementation.
This introduces a small regression on the generated code
quality in the case we are just computing addresses, not
loading values.

Will work on it and on X86-64 support.

llvm-svn: 68552
2009-04-07 21:37:46 +00:00
Rafael Espindola
3d866ac20c remove unused arguments.
llvm-svn: 68109
2009-03-31 16:16:57 +00:00
Evan Cheng
5c02e62620 X86 address mode isel tweak. If the base of the address is also used by a CopyToReg (i.e. it's likely live-out), do not fold the sub-expressions into the addressing mode to avoid computing the address twice. The CopyToReg use will be isel'ed to an LEA; re-use it for the address instead.
This is not yet enabled.

llvm-svn: 68082
2009-03-31 01:13:53 +00:00
Evan Cheng
3e30bcbd69 When optimizing a mul by immediate into two, the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further.
llvm-svn: 68066
2009-03-30 21:36:47 +00:00
Rafael Espindola
34f59009d1 Use array_lengthof
llvm-svn: 67950
2009-03-28 19:02:18 +00:00
Rafael Espindola
7c113e5354 Use fewer hard-coded constants to make the code less brittle.
llvm-svn: 67846
2009-03-27 15:45:05 +00:00
Dan Gohman
e7495ef7aa Don't forego folding of loads into 64-bit adds when the other
operand is a signed 32-bit immediate. Unlike with the 8-bit
signed immediate case, it isn't actually smaller to fold a
32-bit signed immediate instead of a load. In fact, it's
larger in the case of 32-bit unsigned immediates, because
they can be materialized with movl instead of movq.

llvm-svn: 67001
2009-03-14 02:07:16 +00:00
Dan Gohman
37d843c129 Enhance address-mode folding of ISD::ADD to handle cases where the
operands can't both be fully folded at the same time. For example,
in the included testcase, a global variable is being added with
an add of two values. The global variable wants RIP-relative
addressing, so it can't share the address with another base
register, but it's still possible to fold the initial add.

llvm-svn: 66865
2009-03-13 02:25:09 +00:00
Dale Johannesen
560b03bbcd Remove non-DebugLoc versions of BuildMI from X86.
There were some that might even matter in X86FastISel.

llvm-svn: 64437
2009-02-13 02:33:27 +00:00
Chris Lattner
1174b80823 fix the X86 backend to just drop llvm.declare nodes for VLAs instead of
leaving them in the DAG and then getting selection errors.  This is a 
fix for PR3538.

llvm-svn: 64382
2009-02-12 17:33:11 +00:00
Dale Johannesen
b22cb23f6f Use getDebugLoc forwarder instead of getNode()->getDebugLoc.
No functional change.

llvm-svn: 64026
2009-02-07 19:59:05 +00:00
Dan Gohman
8437b9efa1 Refactor some repeated logic into a separate function.
llvm-svn: 63989
2009-02-07 00:43:41 +00:00
Dale Johannesen
e95c76b65e Get rid of one more non-DebugLoc getNode and
its corresponding getTargetNode.  Lots of
caller changes.

llvm-svn: 63904
2009-02-06 01:31:28 +00:00
Dale Johannesen
fa244d6e2d Patch up omissions in DebugLoc propagation.
llvm-svn: 63693
2009-02-04 00:33:20 +00:00
Dale Johannesen
45009f127b DebugLoc propagation
llvm-svn: 63664
2009-02-03 21:48:12 +00:00
Dan Gohman
7d80f8688e Simplify findNonImmUse; return the result using the return value
instead of via a by-reference argument. No functionality change.

llvm-svn: 63118
2009-01-27 19:04:30 +00:00
Dan Gohman
2e0343e321 Eliminate unnecessary operands-list traversals.
llvm-svn: 63088
2009-01-27 02:37:43 +00:00
Evan Cheng
ec03e0cd3b Enhance the logic in X86DAGToDAGISel::PreprocessForRMW which moves a load inside callseq_start so that it can be folded into a call. It was not considering the cases where a token factor is between the load and the callseq_start.
llvm-svn: 63022
2009-01-26 18:43:34 +00:00
Dan Gohman
704f0d5879 Fix a recent regression. ClrOpcode is not set for i8; for i8, if
we want to clear %ah to zero before a division, just use a
zero-extending mov to %al. This fixes PR3366.

llvm-svn: 62691
2009-01-21 14:50:16 +00:00
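
A standalone sketch (illustrative only, not LLVM code) of why the zero-extending move is enough here: the 8-bit DIV divides AX by its operand, so loading the dividend with a zero-extending move fills AH with zeros as a side effect and no separate clear instruction is needed.

#include <cassert>
#include <cstdint>
#include <cstdio>

int main() {
  for (unsigned dividend = 0; dividend < 256; ++dividend)
    for (unsigned divisor = 1; divisor < 256; ++divisor) {
      uint16_t ax = (uint16_t)(uint8_t)dividend;     // zero-extending mov: AH is already 0
      uint8_t quotient  = (uint8_t)(ax / divisor);   // DIV r/m8 puts the quotient in AL
      uint8_t remainder = (uint8_t)(ax % divisor);   // ...and the remainder in AH
      assert(quotient == dividend / divisor && remainder == dividend % divisor);
    }
  std::puts("ok");
}
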
Evan Cheng
06cfade044 DIVREM isel deficiency: If sign bit is known zero, zero out DX/EDX/RDX instead of sign extending the low part (in AX/EAX/RAX) into it.
llvm-svn: 62519
2009-01-19 19:06:11 +00:00
Evan Cheng
182d9c4c9f Fix MatchAddress bug that's preventing negative displacement from being folded in 64-bit mode.
llvm-svn: 62413
2009-01-17 07:09:27 +00:00
Dan Gohman
6fcee67989 Move a few containers out of ScheduleDAGInstrs::BuildSchedGraph
and into the ScheduleDAGInstrs class, so that they don't get
destructed and re-constructed for each block. This fixes a
compile-time hot spot in the post-pass scheduler.

To help facilitate this, tidy and do some minor reorganization
in the scheduler constructor functions.

llvm-svn: 62275
2009-01-15 19:20:50 +00:00
Evan Cheng
ce292ad389 80 col violation.
llvm-svn: 62024
2009-01-10 03:33:22 +00:00
Evan Cheng
487c9ff802 Some code clean up.
llvm-svn: 60850
2008-12-10 21:49:05 +00:00
Evan Cheng
f18016728c On x86, favor folding a short immediate into some arithmetic operations (e.g. add, and, xor, etc.) because materializing an immediate in a register is expensive in terms of code size.
e.g.
movl 4(%esp), %eax
addl $4, %eax

is 2 bytes shorter than

movl $4, %eax
addl 4(%esp), %eax

llvm-svn: 60139
2008-11-27 00:49:46 +00:00
Dan Gohman
229c65c05b Move the code that inserts X87 FP_REG_KILL instructions from a
special-purpose hook to a new pass. Also, add check to see if any
x87 virtual registers are used, to avoid doing any work in the
common case that no x87 code is needed.

llvm-svn: 59190
2008-11-12 22:55:05 +00:00
Dan Gohman
fd00e20872 The 32-bit displacement field in an x86 address is signed. Arrange for it
to be sign-extended when it is promoted to 64 bits for intermediate
offset calculations. The offset calculations are done as uint64_t so that
overflow conditions are well defined.

This fixes a problem which is currently hidden by the x86 AsmPrinter but
which was exposed by r58917 (which is temporarily reverted).  See PR3027
for details.

llvm-svn: 59044
2008-11-11 15:52:29 +00:00
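
A tiny standalone sketch (illustrative only, not LLVM code) of the rule spelled out above: a 32-bit displacement is sign-extended when promoted to 64 bits, and the intermediate sums are carried out in uint64_t so any wraparound is well-defined rather than signed overflow.

#include <cstdint>
#include <cstdio>

uint64_t combineOffsets(int32_t disp32, uint64_t symbolOffset) {
  uint64_t disp = (uint64_t)(int64_t)disp32;   // sign-extend first, then reinterpret
  return symbolOffset + disp;                  // unsigned add: wraps, never UB
}

int main() {
  // A displacement of -8 must become 0xFFFFFFFFFFFFFFF8, not 0x00000000FFFFFFF8.
  std::printf("%#llx\n", (unsigned long long)combineOffsets(-8, 0x1000));  // 0xff8
}
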
Dan Gohman
cd4b68bee9 Eliminate the ISel priority queue, which used the topological order for a
priority function. Instead, just iterate over the AllNodes list, which is
already in topological order. This eliminates a fair amount of bookkeeping,
and speeds up the isel phase by about 15% on many testcases.

The impact on most targets is that AddToISelQueue calls can be simply removed.

In the x86 target, there are two additional notable changes.

The rule-bending AND+SHIFT optimization in MatchAddress that creates new
pre-isel nodes during isel is now a little more verbose, but more robust.
Instead of either creating an invalid DAG or creating an invalid topological
sort, as it has historically done, it can now just insert the new nodes into
the node list at a position where they will be consistent with the topological
ordering.

Also, the address-matching code has logic that checked to see if a node was
"already selected". However, when a node is selected, it has all its uses
taken away via ReplaceAllUsesWith or equivalent, so it won't receive any
further visits from MatchAddress. This code is now removed.

llvm-svn: 58748
2008-11-05 04:14:16 +00:00
Dan Gohman
0ba8aad1af The ANDMask node folds to a constant, and isn't the node that needs to
have its node id set. The new 'and' and 'shift' nodes are the nodes that need
the IDs. This fixes PR2982.

llvm-svn: 58655
2008-11-03 23:43:55 +00:00
David Greene
93f9f0f718 Have TableGen emit setSubgraphColor calls under control of a -gen-debug
flag.  Then in a debugger developers can set breakpoints at these calls
to see what is about to be selected and what the resulting subgraph
looks like.  This really helps when debugging instruction selection.

llvm-svn: 58278
2008-10-27 21:56:29 +00:00
Dan Gohman
15597f07b2 Teach DAGCombine to fold constant offsets into GlobalAddress nodes,
and add a TargetLowering hook for it to use to determine when this
is legal (i.e. not in PIC mode, etc.)

This allows instruction selection to emit folded constant offsets
in more cases, such as the included testcase, eliminating the need
for explicit arithmetic instructions.

This eliminates the need for the C++ code in X86ISelDAGToDAG.cpp
that attempted to achieve the same effect, but wasn't as effective.

Also, fix handling of offsets in GlobalAddressSDNodes in several
places, including changing GlobalAddressSDNode's offset from
int to int64_t.

The Mips, Alpha, Sparc, and CellSPU targets appear to be
unaware of GlobalAddress offsets currently, so set the hook to
false on those targets.

llvm-svn: 57748
2008-10-18 02:06:02 +00:00
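
A standalone sketch of the combine described above (hypothetical types and helper names, not the actual LLVM API — if memory serves, the real hook is a TargetLowering predicate along the lines of isOffsetFoldingLegal): (add (globaladdr G, off), c) is refolded into (globaladdr G, off + c) only when the target says folding offsets into the symbol reference is legal, e.g. not for PIC/GOT-indirect references.

#include <cstdint>
#include <cstdio>
#include <string>

struct GlobalAddr { std::string symbol; int64_t offset; };   // offset is now int64_t

bool offsetFoldingIsLegal(bool isPIC) { return !isPIC; }     // stand-in for the target hook

bool foldConstantOffset(GlobalAddr &ga, int64_t addend, bool isPIC) {
  if (!offsetFoldingIsLegal(isPIC))
    return false;            // leave an explicit add in the DAG
  ga.offset += addend;       // fold the constant into the GlobalAddress node
  return true;
}

int main() {
  GlobalAddr ga{"X", 8};
  foldConstantOffset(ga, 16, /*isPIC=*/false);
  std::printf("%s+%lld\n", ga.symbol.c_str(), (long long)ga.offset);   // X+24
}
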
Dan Gohman
90f776986d Trim #includes.
llvm-svn: 57649
2008-10-16 20:18:31 +00:00
Evan Cheng
39a819027c Fix indentation.
llvm-svn: 57508
2008-10-14 17:15:39 +00:00
Dan Gohman
e08e0dcfcc When doing the very-late shift-and address-mode optimization,
create a new DAG node to represent the new shift to keep the
DAG consistent, even though it'll almost always be folded into
the address.

If a user of the resulting address has multiple uses, the
nodes may get revisited by a later MatchAddress call, in which
case DAG inconsistencies do matter.

This fixes PR2849.

llvm-svn: 57465
2008-10-13 20:52:04 +00:00
Devang Patel
07a9ce22e4 It is possible that not all functions in a module are being
optimized for size. Set OptForSize for each function separately.

llvm-svn: 57182
2008-10-06 18:03:39 +00:00
Dale Johannesen
dc83b95ba5 Make atomic Swap work, 64-bit on x86-32.
Make it all work in non-pic mode.

llvm-svn: 57034
2008-10-03 22:25:52 +00:00
Dale Johannesen
27d8955b8f Pass MemOperand through for 64-bit atomics on 32-bit,
incidentally making the case where the memop is a
pointer deref work.  Fix cmp-and-swap regression.

llvm-svn: 57027
2008-10-03 19:41:08 +00:00
Dan Gohman
00034b1416 Avoid creating two TargetLowering objects for each target.
Instead, just create one, and make sure everything that needs
it can access it. Previously most of the SelectionDAGISel
subclasses all had their own TargetLowering object, which was
redundant with the TargetLowering object in the TargetMachine
subclasses, except on Sparc, where SparcTargetMachine
didn't have a TargetLowering object. Change Sparc to work
more like the other targets here.

llvm-svn: 57016
2008-10-03 16:55:19 +00:00
Dan Gohman
e41ac02dce Remove an unused field.
llvm-svn: 57014
2008-10-03 16:17:33 +00:00
Dan Gohman
30c5ce1b7d Switch the MachineOperand accessors back to the short names like
isReg, etc., from isRegister, etc.

llvm-svn: 57006
2008-10-03 15:45:36 +00:00
Dale Johannesen
dbd7b1bd33 Handle some 64-bit atomics on x86-32, some of the time.
llvm-svn: 56963
2008-10-02 18:53:47 +00:00
Devang Patel
a5cda569d3 Remove OptimizeForSize global. Use function attribute optsize.
llvm-svn: 56937
2008-10-01 23:18:38 +00:00
Dan Gohman
5a2169ee6e Optimize SelectionDAG's AssignTopologicalOrder even further.
Completely eliminate the TopOrder std::vector. Instead, sort
the AllNodes list in place. This also eliminates the need to
call AllNodes.size(), a linear-time operation, before
performing the sort.

Also, eliminate the Sources temporary std::vector, since it
essentially duplicates the sorted result as it is being
built.

This also changes the direction of the topological sort
from bottom-up to top-down. The AllNodes list starts out in
roughly top-down order, so this reduces the amount of
reordering needed. Top-down is also more convenient for
Legalize, and ISel needed only minor adjustments.

llvm-svn: 56867
2008-09-30 18:30:35 +00:00
Dan Gohman
c28f40a821 Move the GlobalBaseReg field out of X86ISelDAGToDAG.cpp
and X86FastISel.cpp into X86MachineFunction.h, so that it
can be shared, instead of having each selector keep track
of its own.

llvm-svn: 56825
2008-09-30 00:58:23 +00:00
Daniel Dunbar
dc1d75b1b5 Unbreak build.
llvm-svn: 56727
2008-09-27 00:22:09 +00:00
Evan Cheng
d63fc80c1e Implement "punpckldq %xmm0, %xmm0" as "pshufd $0x50, %xmm0, %xmm0" unless optimizing for code size.
llvm-svn: 56711
2008-09-26 23:41:32 +00:00
Dan Gohman
989db64c93 Rename ConstantSDNode's getSignExtended to getSExtValue, for
consistency with ConstantInt, and re-implement it in terms
of ConstantInt's getSExtValue.

llvm-svn: 56700
2008-09-26 21:54:37 +00:00
Dan Gohman
56b0c33a9d Factor out the code for determining when symbolic addresses
require RIP-relative addressing and use it to fix a bug
in X86FastISel in x86-64 PIC mode, where it was trying to
use base/index registers with RIP-relative addresses. This
fixes a bunch of x86-64 testsuite failures.

llvm-svn: 56676
2008-09-26 19:15:30 +00:00
Evan Cheng
f942615847 Properly handle 'm' inline asm constraints. If a GV is being selected for the addressing mode, it requires the same logic for PIC relative addressing, etc.
llvm-svn: 56526
2008-09-24 00:05:32 +00:00
Dan Gohman
e21054687a Delete an unused function.
llvm-svn: 56495
2008-09-23 18:26:47 +00:00
Dan Gohman
6bed18a334 Move the code for initializing the global base reg out of
X86ISelDAGToDAG.cpp and into X86InstrInfo.cpp. This will allow
it to be reused by FastISel.

llvm-svn: 56494
2008-09-23 18:22:58 +00:00
Dan Gohman
9475c22537 Simplify and generalize X86DAGToDAGISel::CanBeFoldedBy, and draw
up some new ascii art to illustrate what it does. This change
currently has no effect on generated code.

llvm-svn: 56270
2008-09-17 01:39:10 +00:00
Bill Wendling
932818c75a Reverting r56249. On further investigation, this functionality isn't needed.
Apologies for the thrashing.

llvm-svn: 56251
2008-09-16 21:48:12 +00:00
Bill Wendling
1a240c8033 - Change "ExternalSymbolSDNode" to "SymbolSDNode".
- Add linkage to SymbolSDNode (default to external).
- Change ISD::ExternalSymbol to ISD::Symbol.
- Change ISD::TargetExternalSymbol to ISD::TargetSymbol

These changes pave the way to allowing SymbolSDNodes with non-external linkage.

llvm-svn: 56249
2008-09-16 21:12:30 +00:00
Dan Gohman
89660301e3 Rename ConstantSDNode::getValue to getZExtValue, for consistency
with ConstantInt. This led to fixing a bug in TargetLowering.cpp
using getValue instead of getAPIntValue.

llvm-svn: 56159
2008-09-12 16:56:44 +00:00
Gabor Greif
7db742d8c2 fix a bunch of 80-col violations
llvm-svn: 55588
2008-08-31 15:37:04 +00:00
Gabor Greif
86c795a8ca erect abstraction boundaries for accessing SDValue members, rename Val -> Node to reflect semantics
llvm-svn: 55504
2008-08-28 21:40:38 +00:00
Gabor Greif
4b86114f92 disallow direct access to SDValue::ResNo, provide a getter instead
llvm-svn: 55394
2008-08-26 22:36:50 +00:00
Evan Cheng
569b489cf5 Try an approach to moving the call address load inside of callseq_start. Now it's done during the preprocess of x86 isel. callseq_start's chain is changed to the load's chain node, while the load's chain is the last of the callseq_start or the loads or copytoreg nodes inserted to move arguments to the right spot.
llvm-svn: 55338
2008-08-25 21:27:18 +00:00
Dan Gohman
a9d5f9b006 Move the point at which FastISel taps into the SelectionDAGISel
process up to a higher level. This allows FastISel to leverage
more of SelectionDAGISel's infrastructure, such as updating Machine
PHI nodes.

Also, implement transitioning from SDISel back to FastISel in
the middle of a block, so it's now possible to go back and
forth. This allows FastISel to hand individual CallInsts and other
complicated things off to SDISel to handle, while handling the rest
of the block itself.

To help support this, reorganize the SelectionDAG class so that it
is allocated once and reused throughout a function, instead of
being completely reallocated for each block.

llvm-svn: 55219
2008-08-23 02:25:05 +00:00
Dan Gohman
4b801d38a1 Simplify SelectRoot's interface, and factor out some common code
from all targets.

llvm-svn: 55124
2008-08-21 16:36:34 +00:00
Dan Gohman
411cc551cb Move the handling of ANY_EXTEND, SIGN_EXTEND_INREG, and TRUNCATE
out of X86ISelDAGToDAG.cpp C++ code and into tablegen code.
Among other things, using tablegen for these things makes them
friendlier to FastISel.

Tablegen can handle the case of i8 subregs on x86-32, but currently
the C++ code for that case uses MVT::Flag in a tricky way, and it
happens to schedule better in some cases. So for now, leave the
C++ code in place to handle the i8 case on x86-32.

llvm-svn: 55078
2008-08-20 21:27:32 +00:00
Evan Cheng
6534c78383 Fix a (u)comiss intrinsic lowering bug. It was using anyext which can return junk in higher bits. Patch by Nate Begeman.
llvm-svn: 54903
2008-08-17 19:22:34 +00:00
Dan Gohman
502d2aebff Oops, check in these files too, for the FastISel -> Fast rename.
llvm-svn: 54750
2008-08-13 19:55:00 +00:00
Dale Johannesen
718fcee02d Some fixes for x86-64 JIT. Make it use small code
model, except for external calls; this makes
addressing modes PC-relative.  Incomplete.

The assertion at the top of Emitter::runOnMachineFunction
was obviously bogus (always true) so I removed it.
If someone knows what the correct test should be to cover
all the various targets, please fix.

llvm-svn: 54656
2008-08-11 23:46:25 +00:00
Dan Gohman
9742f7772d Rename SDOperand to SDValue.
llvm-svn: 54128
2008-07-27 21:46:04 +00:00
Dan Gohman
47c5cdbc34 Tidy SDNode::use_iterator, and complete the transition to have it
parallel its analogue, Value::value_use_iterator. The operator* method
now returns the user, rather than the use.

llvm-svn: 54127
2008-07-27 20:43:25 +00:00
Dan Gohman
b91bef08a7 Add titles to the various SelectionDAG viewGraph calls
that include useful information like the name of the
block being viewed and the current phase of compilation.

llvm-svn: 53872
2008-07-21 20:00:07 +00:00
Dan Gohman
8981962672 Add a new function, ReplaceAllUsesOfValuesWith, which handles bulk
replacement of multiple values. This is slightly more efficient
than doing multiple ReplaceAllUsesOfValueWith calls, and theoretically
could be optimized even further. However, an important property of this
new function is that it handles the case where the source value set and
destination value set overlap. This makes it feasible for isel to use
SelectNodeTo in many very common cases, which is advantageous because
SelectNodeTo avoids a temporary node and it doesn't require CSEMap
updates for users of values that don't change position.

Revamp MorphNodeTo, which is what does all the work of SelectNodeTo, to
handle operand lists more efficiently, and to correctly handle a number
of corner cases to which its new wider use exposes it.

This commit also includes a change to the encoding of post-isel opcodes
in SDNodes; now instead of being sandwiched between the target-independent
pre-isel opcodes and the target-dependent pre-isel opcodes, post-isel
opcodes are now represented as negative values. This makes it possible
to test if an opcode is pre-isel or post-isel without having to know
the size of the current target's post-isel instruction set.

These changes speed up llc overall by 3% and reduce memory usage by 10%
on the InstructionCombining.cpp testcase with -fast and -regalloc=local.

llvm-svn: 53728
2008-07-17 19:10:17 +00:00
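
A minimal standalone sketch (illustrative only; the exact encoding below is hypothetical, not the one used in tree) of the opcode-encoding idea described above: if post-isel opcodes are stored as negative values, the pre-/post-isel test is a simple sign check that needs no knowledge of the target's opcode range.

#include <cassert>

inline bool isPostIselOpcode(int nodeOpcode) { return nodeOpcode < 0; }
inline int encodeMachineOpcode(unsigned machineOpc) { return -int(machineOpc) - 1; }
inline unsigned decodeMachineOpcode(int nodeOpcode) { return unsigned(-nodeOpcode - 1); }

int main() {
  int enc = encodeMachineOpcode(42);
  assert(isPostIselOpcode(enc) && decodeMachineOpcode(enc) == 42);
  assert(!isPostIselOpcode(7));   // pre-isel (ISD) opcodes stay non-negative
}
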
Dan Gohman
4c8c8e3aad Fix the result type of X86's truncate to i8.
llvm-svn: 53688
2008-07-16 16:20:48 +00:00
Evan Cheng
67ce381ffe Do not use computationally expensive scheduling heuristics with -fast.
llvm-svn: 52971
2008-07-01 18:05:03 +00:00
Evan Cheng
3f664b6fd3 Split scheduling from instruction selection.
llvm-svn: 52923
2008-06-30 20:45:06 +00:00
Evan Cheng
deb754898b Unbreak DECLARE isel in pic mode.
llvm-svn: 52439
2008-06-18 02:48:27 +00:00
Evan Cheng
89e2e3292d Rather than avoiding wrapping the ISD::DECLARE GV operand in X86ISD::Wrapper, simply handle it at dagisel time with x86-specific isel code.
llvm-svn: 52377
2008-06-17 02:01:22 +00:00
Duncan Sands
d634afe3aa Wrap MVT::ValueType in a struct to get type safety
and better control the abstraction.  Rename the type
to MVT.  To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits().  Use VT.getSimpleVT()
to extract a MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type - these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).

llvm-svn: 52044
2008-06-06 12:08:01 +00:00
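
A standalone model (not the actual LLVM classes, just an illustration of the shape of the change) of what the wrapper buys: the bare enum becomes a small struct, the free functions become members, and switch statements go through the simple-value accessor — which is exactly the out-of-tree rewrite described above.

#include <cstdio>

struct MVT {
  enum SimpleValueType { i8, i16, i32, i64, f32, f64 };
  SimpleValueType SimpleTy;

  MVT(SimpleValueType VT) : SimpleTy(VT) {}
  SimpleValueType getSimpleVT() const { return SimpleTy; }   // for switch statements
  unsigned getSizeInBits() const {                           // was MVT::getSizeInBits(VT)
    switch (SimpleTy) {
    case i8:  return 8;
    case i16: return 16;
    case i32: case f32: return 32;
    case i64: case f64: return 64;
    }
    return 0;
  }
};

int main() {
  MVT VT = MVT::i32;
  std::printf("%u\n", VT.getSizeInBits());   // the VT.getSizeInBits() form
}
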
Dan Gohman
4e87d82476 Fix a tblgen problem handling variable_ops in tblgen instruction
definitions. This adds a new construct, "discard", for indicating
that a named node in the input matching pattern is to be discarded,
instead of corresponding to a node in the output pattern. This
allows tblgen to know where the arguments for the variable_ops are
supposed to begin.

This fixes "rdar://5791600", whatever that is ;-).

llvm-svn: 51699
2008-05-29 19:57:41 +00:00
Evan Cheng
4f660778f0 Use movlps / movhps to modify the low / high half of a 16-byte memory location.
llvm-svn: 51501
2008-05-23 21:23:16 +00:00
Evan Cheng
3493e43afd Handle a few more cases of folding load i64 into xmm and zero top bits.
Note, some of the code will be moved into target independent part of DAG combiner in a subsequent patch.

llvm-svn: 50918
2008-05-09 21:53:03 +00:00
Evan Cheng
f97e716511 Handle vector move / load which zero the destination register top bits (i.e. movd, movq, movss (addr), movsd (addr)) with X86 specific dag combine.
llvm-svn: 50838
2008-05-08 00:57:18 +00:00
Evan Cheng
37ca5de3b7 Not checking for intrinsics which do not have a chain operand.
llvm-svn: 50260
2008-04-25 08:55:28 +00:00
Evan Cheng
e177dc6696 - Switch from std::set to SmallPtrSet.
- Add comments.

llvm-svn: 50259
2008-04-25 08:22:20 +00:00
Chris Lattner
8c9f6c929a Loosen up an assertion to allow intrinsics. I really have no
idea what this code (findNonImmUse) does, so I'm only guessing 
that this is the right thing.  It would be really really nice
if this had comments and perhaps switched to SmallPtrSet
(hint hint) :)

This fixes rdar://5886601, a crash on gcc.target/i386/sse4_1-pblendw.c

llvm-svn: 50252
2008-04-25 05:13:01 +00:00
Roman Levenstein
b40d332929 Re-commit of r48822, where the infinite looping problem discovered
by Dan Gohman is fixed.

llvm-svn: 49330
2008-04-07 10:06:32 +00:00
Evan Cheng
12d2bbde0d Cosmetic
llvm-svn: 49156
2008-04-03 07:45:18 +00:00
Evan Cheng
497c607fae Backing out 48222 temporarily.
llvm-svn: 49124
2008-04-03 03:13:16 +00:00
Roman Levenstein
55b8822511 Use a linked data structure for the uses lists of an SDNode, just like
LLVM Value/Use does and MachineRegisterInfo/MachineOperand does.
This allows constant time for all uses list maintenance operations.

The idea was suggested by Chris. Reviewed by Evan and Dan.
Patch is tested and approved by Dan.

On normal use-cases compilation speed is not affected. On very big basic
blocks there are compilation speedups in the range of 15-20% or even better. 

llvm-svn: 48822
2008-03-26 12:39:26 +00:00
Chris Lattner
edfc239ced remove Evan's "ugly hack" that sorta attempted to get
x86-64 return conventions correct, but was never enabled.
We can now do the "right thing" with multiple return values.

llvm-svn: 48635
2008-03-21 06:50:21 +00:00
Christopher Lamb
b4f4b41048 Make insert_subreg a two-address instruction, vastly simplifying LowerSubregs pass. Add a new TII, subreg_to_reg, which is like insert_subreg except that it takes an immediate implicit value to insert into rather than a register.
llvm-svn: 48412
2008-03-16 03:12:01 +00:00
Christopher Lamb
0f1c32eb63 Get rid of a pseudo instruction and replace it with a subreg-based operation on real instructions, ridding the asm printers of the hack used to do this previously. In the process, update LowerSubregs to be careful about eliminating copies that have side effects.
Note: the coalescer will have to be careful about this too, when it starts coalescing insert_subreg nodes.
llvm-svn: 48329
2008-03-13 05:47:01 +00:00
Christopher Lamb
74f4d837df Recommitting parts of r48130. These do not appear to cause the observed failures.
llvm-svn: 48223
2008-03-11 10:09:17 +00:00
Chris Lattner
9826c9365e Change the model for FP Stack return to use fp operands on the
RET instruction instead of using FpSET_ST0_32.  This also generalizes
the code to handling returning of multiple FP results.

llvm-svn: 48209
2008-03-11 03:23:40 +00:00
Chris Lattner
f0684bfd16 Don't emit FP_REG_KILL into a block that just returns. Nothing
can be live out of the block anyway, so it isn't needed.

llvm-svn: 48192
2008-03-10 23:34:12 +00:00
Evan Cheng
067ecbc341 Revert 48125, 48126, and 48130 for now to unbreak some x86-64 tests.
llvm-svn: 48167
2008-03-10 19:31:26 +00:00
Christopher Lamb
32e5ce3d96 Allow insert_subreg into implicit, target-specific values.
Change insert/extract subreg instructions to be able to be used in TableGen patterns.
Use the above features to reimplement an x86-64 pseudo instruction as a pattern.

llvm-svn: 48130
2008-03-10 06:12:08 +00:00
Chris Lattner
826402e365 rename FpGETRESULT32 -> FpGET_ST0_32 etc. Add support for
isel'ing value preserving FP roundings from one fp stack reg to another
into a noop, instead of stack traffic.

llvm-svn: 48093
2008-03-09 07:05:32 +00:00
Evan Cheng
139517b682 Remove -always-fold-and-in-test.
llvm-svn: 47871
2008-03-04 00:40:35 +00:00
Evan Cheng
e1d3e0958b Set to default: x86 no longer folds an 'and' into a 'test' if it has more than one use.
llvm-svn: 47711
2008-02-28 07:46:38 +00:00
Dan Gohman
b8e7fea22f Revert the assert for MUL_LOHI with an unused high result; Chris
pointed out that this isn't correct at -O0.

llvm-svn: 47575
2008-02-25 22:43:48 +00:00
Dan Gohman
2585bc7c46 Add an assert to verify that we don't see an
{S,U}MUL_LOHI with an unused high value.

llvm-svn: 47569
2008-02-25 22:15:55 +00:00
Dan Gohman
1914fa6932 Remove the hack that turned an {S,U}MUL_LOHI with an unused high
result into a MUL late in the X86 codegen process. ISD::MUL is
once again Legal on X86, so this is no longer needed. And, the
hack was suboptimal; see PR1874 for details.

llvm-svn: 47567
2008-02-25 21:57:04 +00:00
Dan Gohman
012abf0109 Convert MaskedValueIsZero and all its users to use APInt. Also add
a SignBitIsZero function to simplify a common use case.

llvm-svn: 47561
2008-02-25 21:11:39 +00:00
Evan Cheng
f3a7cd1c62 Poorly named option.
llvm-svn: 47400
2008-02-20 20:57:32 +00:00
Evan Cheng
e9708c997f Disable for now. This is pessimizing code.
llvm-svn: 47354
2008-02-20 02:29:17 +00:00
Evan Cheng
35253f2c22 Add hidden option -x86-fold-and-in-test to test the effect of the test / and folding change.
llvm-svn: 47351
2008-02-19 23:36:51 +00:00
Evan Cheng
e3ddcfa588 Only use x86-64 rip-relative addressing in non-static mode?
llvm-svn: 47019
2008-02-12 19:20:46 +00:00
Dan Gohman
cabaec582f Rename MRegisterInfo to TargetRegisterInfo.
llvm-svn: 46930
2008-02-10 18:45:23 +00:00
Evan Cheng
a377b2bbd1 Fix a x86-64 codegen deficiency. Allow gv + offset when using rip addressing mode.
Before:
_main:
        subq    $8, %rsp
        leaq    _X(%rip), %rax
        movsd   8(%rax), %xmm1
        movss   _X(%rip), %xmm0
        call    _t
        xorl    %ecx, %ecx
        movl    %ecx, %eax
        addq    $8, %rsp
        ret
Now:
_main:
        subq    $8, %rsp
        movsd   _X+8(%rip), %xmm1
        movss   _X(%rip), %xmm0
        call    _t
        xorl    %ecx, %ecx
        movl    %ecx, %eax
        addq    $8, %rsp
        ret

Notice there is another idiotic codegen issue that needs to be fixed asap:
xorl    %ecx, %ecx
movl    %ecx, %eax

llvm-svn: 46850
2008-02-07 08:53:49 +00:00
Evan Cheng
1c67dcaae7 Dwarf requires variable entries to be in the source order. Right now, since we are recording variable information at isel time, parameters would appear in the reverse order. The short term fix is to issue recordVariable() at asm printing time instead.
llvm-svn: 46724
2008-02-04 23:06:48 +00:00
Evan Cheng
c57ec111f2 SDIsel processes llvm.dbg.declare by recording the variable debug information descriptor and its corresponding stack frame index in MachineModuleInfo. This only works if the local variable is "homed" in the stack frame. It does not work for byval parameter, etc.
Added ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. Now the intrinsic calls are lowered into SDNodes and live on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, when an ISD::DECLARE node is selected, it has the side effect of also recording the variable. This is a short term solution that should be fixed in time.

llvm-svn: 46659
2008-02-02 04:07:54 +00:00
Evan Cheng
618761903d Work in progress. This patch *fixes* x86-64 calls which are modelled as StructRet but really should be returned in registers, e.g. _Complex long double, some 128-bit aggregates. This is a short term solution that is necessary only because llvm, for now, cannot model i128 nor calls with multiple results.
Status: This only works for direct calls, and only the caller side is done. Disabled for now.

llvm-svn: 46527
2008-01-29 19:34:22 +00:00
Chris Lattner
16a8f126d3 Significantly simplify and improve handling of FP function results on x86-32.
This case returns the value in ST(0) and then has to convert it to an SSE
register.  This causes significant codegen ugliness in some cases.  For 
example in the trivial fp-stack-direct-ret.ll testcase we used to generate:

_bar:
	subl	$28, %esp
	call	L_foo$stub
	fstpl	16(%esp)
	movsd	16(%esp), %xmm0
	movsd	%xmm0, 8(%esp)
	fldl	8(%esp)
	addl	$28, %esp
	ret

because we move the result of foo() into an XMM register, then have to
move it back for the return of bar.

Instead of hacking ever-more special cases into the call result lowering code
we take a much simpler approach: on x86-32, fp return is modeled as always 
returning into an f80 register which is then truncated to f32 or f64 as needed.
Similarly for a result, we model it as an extension to f80 + return.

This exposes the truncate and extensions to the dag combiner, allowing target
independent code to hack on them, eliminating them in this case.  This gives 
us this code for the example above:

_bar:
	subl	$12, %esp
	call	L_foo$stub
	addl	$12, %esp
	ret

The nasty aspect of this is that these conversions are not legal, but we want
the second pass of dag combiner (post-legalize) to be able to hack on them.
To handle this, we lie to legalize and say they are legal, then custom expand
them on entry to the isel pass (PreprocessForFPConvert).  This is gross, but
less gross than the code it is replacing :)

This also allows us to generate better code in several other cases.  For 
example on fp-stack-ret-conv.ll, we now generate:

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstps	8(%esp)
	movl	16(%esp), %eax
	cvtss2sd	8(%esp), %xmm0
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

where before we produced (incidentally, the old bad code is identical to what
gcc produces):

_test:
	subl	$12, %esp
	call	L_foo$stub
	fstpl	(%esp)
	cvtsd2ss	(%esp), %xmm0
	cvtss2sd	%xmm0, %xmm0
	movl	16(%esp), %eax
	movsd	%xmm0, (%eax)
	addl	$12, %esp
	ret

Note that we generate slightly worse code on pr1505b.ll due to a scheduling 
deficiency that is unrelated to this patch.

llvm-svn: 46307
2008-01-24 08:07:48 +00:00
Evan Cheng
165d39b8e4 Fix a x86-64 static codegen bug. This fixes a lot of x86-64 jit failures.
llvm-svn: 45733
2008-01-08 02:06:11 +00:00
Evan Cheng
5b9282f3b5 Combine MovePCtoStack + POP32r into one instruction MOVPC32r so it can be moved if needed.
llvm-svn: 45605
2008-01-05 00:41:47 +00:00
Chris Lattner
96167aa93c Rename SSARegMap -> MachineRegisterInfo in keeping with the idea
that "machine" classes are used to represent the current state of
the code being compiled.  Given this expanded name, we can start 
moving other stuff into it.  For now, move the UsedPhysRegs and
LiveIn/LiveOuts vectors from MachineFunction into it.

Update all the clients to match.

This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.

llvm-svn: 45467
2007-12-31 04:13:23 +00:00
Chris Lattner
ad9a6ccb83 Remove attribution from file headers, per discussion on llvmdev.
llvm-svn: 45418
2007-12-29 20:36:04 +00:00
Evan Cheng
8f4ec948d3 Fix JIT code emission of X86::MovePCtoStack.
llvm-svn: 45307
2007-12-22 02:26:46 +00:00
Evan Cheng
343929c773 Fold some and + shift in x86 addressing mode.
llvm-svn: 44970
2007-12-13 00:43:27 +00:00
Chris Lattner
12fca81026 aesthetic changes, no functionality change. Evan, it's not clear
what 'Available' is, please add a comment near it and rename it
if appropriate.

llvm-svn: 44703
2007-12-08 07:22:58 +00:00
Chris Lattner
be0c5a0500 Fix a long standing deficiency in the X86 backend: we would
sometimes emit "zero" and "all one" vectors multiple times,
for example:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M2
	ret

instead of:

_test2:
	pcmpeqd	%mm0, %mm0
	movq	%mm0, _M1
	movq	%mm0, _M2
	ret

This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type.  This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.

This patch makes the following changes:

1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
   their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
   immAllZerosV in the wrong form now use *_bc to match them with a
   bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle 
   bitcast'd zero vectors, which simplifies the code actually.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
   is legal, instead of generating one that is illegal and expecting
   a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.

This patch is definite goodness, but has the potential to cause random
code quality regressions.  Please be on the lookout for these and let 
me know if they happen.

llvm-svn: 44310
2007-11-25 00:24:49 +00:00
Bill Wendling
df2eaa8a55 Silence, accursed warning
llvm-svn: 43609
2007-11-01 08:51:44 +00:00
Dan Gohman
76e104c8ad Fix the folding of multiplication into addresses on x86, which was broken
by the recent {U,S}MUL_LOHI changes.

llvm-svn: 43230
2007-10-22 20:22:24 +00:00
Evan Cheng
f1ead16fd5 Flag MOV32to32_ with EXTRACT_SUBREG. They should not be scheduled apart.
llvm-svn: 42894
2007-10-12 07:55:53 +00:00
Dan Gohman
cc317de0f5 Fix grammar in a comment.
llvm-svn: 42786
2007-10-09 15:44:37 +00:00
Dan Gohman
6df332f0cb Migrate X86 and ARM from using X86ISD::{,I}DIV and ARMISD::MULHILO{U,S} to
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent code in LegalizeDAG.cpp
and TargetLowering.cpp.

llvm-svn: 42762
2007-10-08 18:33:35 +00:00
Anton Korobeynikov
ca03aec919 Partly revert invalid r41774
llvm-svn: 42322
2007-09-25 21:52:30 +00:00
Dan Gohman
1bb346f9f1 When both x/y and x%y are needed (x and y both scalar integer), compute
both results with a single div or idiv instruction. This uses new X86ISD
nodes for DIV and IDIV which are introduced during the legalize phase
so that the SelectionDAG's CSE can automatically eliminate redundant
computations.

llvm-svn: 42308
2007-09-25 18:23:27 +00:00
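
A source-level example (illustrative only) of the case this commit targets: when a function needs both x/y and x%y, the hardware div/idiv already produces the quotient and the remainder together, and CSE of the new DIV/IDIV nodes lets both results come out of a single instruction.

#include <cstdio>

void divmod(int x, int y, int &q, int &r) {
  q = x / y;   // with the combined node, this pair...
  r = x % y;   // ...is served by one idiv rather than two
}

int main() {
  int q, r;
  divmod(17, 5, q, r);
  std::printf("%d %d\n", q, r);   // 3 2
}
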
Dale Johannesen
5ea6a9bc3a When mixing SSE and x87 codegen, it's possible to
have situations where an SSE instruction turns into
multiple blocks, with the live range of an x87
register crossing them.  To do this correctly make
sure we examine all blocks when inserting
FP_REG_KILL.  PR 1697.  (This was exposed by my
fix for PR 1681, but the same thing could happen
mixing x87 long double with SSE.)

llvm-svn: 42281
2007-09-24 22:52:39 +00:00
Evan Cheng
65df926ced TableGen no longer emits CopyFromReg nodes for implicit results in physical
registers. The scheduler is now responsible for emitting them.

llvm-svn: 41781
2007-09-07 23:59:02 +00:00
Dale Johannesen
783215c630 Apply feedback from previous patch.
llvm-svn: 41774
2007-09-07 21:07:57 +00:00
Dale Johannesen
81d6ecb886 Enhance APFloat to retain bits of NaNs (fixes oggenc).
Use APFloat interfaces for more references, mostly
of ConstantFPSDNode.

llvm-svn: 41632
2007-08-31 04:03:46 +00:00
Dan Gohman
2390ff5060 When x86 address matching exceeds its recursion limit, check to
see if the base register is already occupied before assuming it can be
used. This fixes bogus code generation in the accompanying testcase.

llvm-svn: 41049
2007-08-13 20:03:06 +00:00
Christopher Lamb
7e52a97df5 Use subregs to improve any_extend code generation when feasible.
llvm-svn: 41013
2007-08-10 22:22:41 +00:00
Christopher Lamb
450f6815b9 Increase efficiency of sign_extend_inreg by using subregisters for truncation. As the README suggests sign_extend_subreg is selected to (sext(trunc)).
llvm-svn: 41010
2007-08-10 21:48:46 +00:00
Evan Cheng
a58ebc46dd divb / mulb output to AH. Under x86-64 it's not legal to read AH if the instruction requires a REX prefix (i.e. outputs to r8b, etc.). So issue a shift right by 8 on AX and then truncate it to 8 bits instead.
llvm-svn: 40972
2007-08-09 21:59:35 +00:00
Dale Johannesen
6b8e91e7e3 Long double patch 8 of N: make it partially work in
SSE mode (all but conversions <-> other FP types, I think):
>>Do not mark all-80-bit operations as "Requires[FPStack]"
(which really means "not SSE").
>>Refactor load-and-extend to facilitate this.
>>Update comments.
>>Handle long double in SSE when computing FP_REG_KILL.

llvm-svn: 40906
2007-08-07 20:29:26 +00:00
Dale Johannesen
3ea9879011 Get X86 long double calling convention to work
(on Darwin, anyway).  Fix some table omissions for
LD arithmetic.

llvm-svn: 40877
2007-08-06 21:31:06 +00:00
Evan Cheng
3163814591 Switch some multiplication instructions over to the new scheme for testing.
llvm-svn: 40723
2007-08-02 05:48:35 +00:00
Evan Cheng
0fa6cdbff5 Mac OS X X86-64 low 4G address not available.
llvm-svn: 40701
2007-08-01 23:45:51 +00:00
Christopher Lamb
919ce03da6 Change the x86 backend to use extract_subreg for truncation operations. Passes DejaGnu, SingleSource and MultiSource.
llvm-svn: 40578
2007-07-29 01:24:57 +00:00
Evan Cheng
9802b13b38 Minor bug.
llvm-svn: 40535
2007-07-26 17:02:45 +00:00
Evan Cheng
413d222576 Same goes for constantpool, etc.
llvm-svn: 40517
2007-07-26 07:35:15 +00:00
Evan Cheng
9588231d34 Mac OS X x86-64 lower 4G address is not available.
llvm-svn: 40502
2007-07-25 23:41:36 +00:00
Dan Gohman
1444c5840b Add const to CanBeFoldedBy, CheckAndMask, and CheckOrMask.
llvm-svn: 40480
2007-07-24 23:00:27 +00:00
Dale Johannesen
7af19491d3 Fix for PR 1505 (and 1489). Rewrite X87 register
model to include f32 variants.  Some factoring
improvements forthcoming.

llvm-svn: 37847
2007-07-03 00:53:03 +00:00
Dan Gohman
a62327ea40 Move ComputeMaskedBits, MaskedValueIsZero, and ComputeNumSignBits from
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.

llvm-svn: 37704
2007-06-22 14:59:07 +00:00
Chris Lattner
b97b122176 Fix CodeGen/X86/2007-03-24-InlineAsmPModifier.ll
llvm-svn: 35926
2007-04-11 22:29:46 +00:00
Anton Korobeynikov
1a8740c88b Oops :)
llvm-svn: 35438
2007-03-28 18:38:33 +00:00
Anton Korobeynikov
d59c4e54c7 Don't allow MatchAddress to recurse too much. This trims exponential
behaviour in some cases.

llvm-svn: 35437
2007-03-28 18:36:33 +00:00
Chris Lattner
b9cc0ade43 Two changes:
1) codegen a shift of a register as a shift, not an LEA.
2) teach the RA to convert a shift to an LEA instruction if it wants something
   in three-address form.

This gives us asm diffs like:

-       leal (,%eax,4), %eax
+       shll $2, %eax

which is faster on some processors and smaller on all of them.

and, more interestingly:

-       movl 24(%esi), %eax
-       leal (,%eax,4), %edi
+       movl 24(%esi), %edi
+       shll $2, %edi

Without #2, #1 was a significant pessimization in some cases.

This implements CodeGen/X86/shift-codegen.ll

llvm-svn: 35204
2007-03-20 06:08:29 +00:00
Chris Lattner
3e01807e87 Fix a miscompilation in the addr mode code trying to implement X | C and
X + C to promote LEA formation.  We would incorrectly apply it in some cases
(test) and miss it in others.

This fixes CodeGen/X86/2007-02-04-OrAddrMode.ll

llvm-svn: 33884
2007-02-04 20:18:17 +00:00
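
A small standalone sketch (illustrative only, not LLVM code) of the condition behind the fix: X | C may be treated as X + C for LEA formation only when every set bit of C falls on a bit known to be zero in X — for example X = n << 2 with C = 3 — and applying it without that check is exactly the miscompile being fixed.

#include <cassert>
#include <cstdint>
#include <cstdio>

// Hypothetical helper: the known-zero mask plays the role of MaskedValueIsZero.
bool orIsEquivalentToAdd(uint32_t knownZeroBitsOfX, uint32_t C) {
  return (C & ~knownZeroBitsOfX) == 0;    // C must fit entirely in known-zero bits
}

int main() {
  for (uint32_t n = 0; n < 1000; ++n) {
    uint32_t X = n << 2;                  // low two bits of X are known zero
    assert((X | 3u) == (X + 3u));         // so or and add coincide here
  }
  assert(orIsEquivalentToAdd(0x3, 0x3));  // C=3 lives in the known-zero bits: safe
  assert(!orIsEquivalentToAdd(0x3, 0x4)); // C=4 overlaps unknown bits: not safe
  std::puts("ok");
}
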
Evan Cheng
818c6bdfa2 Linux GOT indirect reference is only necessary in PIC mode.
llvm-svn: 33441
2007-01-22 21:34:25 +00:00
Reid Spencer
09efdecc2d Adjust #includes to compensate for the loss of DerivedTypes.h in
TargetLowering.h

llvm-svn: 33154
2007-01-12 23:22:14 +00:00
Anton Korobeynikov
548b9af9c2 * PIC codegen for X86/Linux has been implemented
* PIC-aware internal structures in X86 Codegen have been refactored
* Visibility (default/weak) has been added
* Docs fixes (external weak linkage, visibility, formatting)

llvm-svn: 33136
2007-01-12 19:20:47 +00:00
Anton Korobeynikov
2b39939053 Really big cleanup.
- New target type "mingw" was introduced
- Same things for both mingw & cygwin are marked as "cygming" (as in gcc)
- .lcomm is supported here, so allow LLVM to use it
- Correctly use underscored versions of setjmp & _longjmp for both mingw & cygwin

llvm-svn: 32833
2007-01-03 11:43:14 +00:00
Chris Lattner
8896b6cb46 eliminate static ctors for Statistic objects.
llvm-svn: 32703
2006-12-19 22:59:26 +00:00
Evan Cheng
f5c9f4c3c9 Fix for PR1062 by Dan Gohman.
llvm-svn: 32688
2006-12-19 21:31:42 +00:00
Bill Wendling
f13d78d3b8 What should be the last unnecessary <iostream>s in the library.
llvm-svn: 32333
2006-12-07 22:21:48 +00:00
Chris Lattner
a531ce882e Detemplatize the Statistic class. The only type it is instantiated with
is 'unsigned'.

llvm-svn: 32279
2006-12-06 17:46:33 +00:00
Evan Cheng
40a5de9cd9 Revert an unintended change.
llvm-svn: 32239
2006-12-05 22:03:40 +00:00
Evan Cheng
adeea85f7d - Switch X86-64 JIT to large code size model.
- Re-enable some codegen niceties for X86-64 static relocation model codegen.
- Clean ups, etc.

llvm-svn: 32238
2006-12-05 19:50:18 +00:00
Evan Cheng
2c35691a02 - Fix X86-64 JIT by temporarily disabling code that treats GV address as 32-bit
immediate in small code model. The JIT cannot ensure GV's are placed in the
lower 4G.
- Some preliminary support for large code model.

llvm-svn: 32215
2006-12-05 04:01:03 +00:00
Evan Cheng
456101ebb9 - Use a different wrapper node for RIP-relative GV, etc.
- Proper support for both small static and PIC modes under X86-64
- Some (non-optimal) support for medium modes.

llvm-svn: 32046
2006-11-30 21:55:46 +00:00
Evan Cheng
1e3f41acde Clean up.
llvm-svn: 32027
2006-11-29 23:46:27 +00:00
Evan Cheng
7e20347607 Fix for PR1018 - Better support for X86-64 Linux in small code model.
llvm-svn: 32026
2006-11-29 23:19:46 +00:00
Evan Cheng
98fa7ab4d7 Change MachineInstr ctor's to take a TargetInstrDescriptor reference instead
of opcode and number of operands.

llvm-svn: 31947
2006-11-27 23:37:22 +00:00
Evan Cheng
a9176b38f9 For unsigned 8-bit division, use movzbw to set the lower 8 bits of AX while
clearing the upper 8-bits instead of issuing two instructions. This also
eliminates the need to target the AH register which can be problematic on
x86-64.

llvm-svn: 31832
2006-11-17 22:10:14 +00:00
Bill Wendling
b6061e32fa Removed even more std::cerr and #include <iostream> things.
llvm-svn: 31813
2006-11-17 07:52:03 +00:00
Evan Cheng
0e82270ff2 Matches MachineInstr changes.
llvm-svn: 31712
2006-11-13 23:36:35 +00:00
Evan Cheng
b9e2ae9e37 Add implicit use / def operands to created MI's.
llvm-svn: 31676
2006-11-11 10:21:44 +00:00
Evan Cheng
f880ed86ff Add all implicit defs to FP_REG_KILL mi.
llvm-svn: 31674
2006-11-11 07:19:36 +00:00
Evan Cheng
3a017e8abd Fix a bug in SelectScalarSSELoad. Since the load is wrapped in a
SCALAR_TO_VECTOR, even if the hasOneUse() check pass we may end up folding
the load into two instructions. Make sure we check the SCALAR_TO_VECTOR
has only one use as well.

llvm-svn: 31641
2006-11-10 21:23:04 +00:00
Evan Cheng
736a8eb3cd Match tblgen changes.
llvm-svn: 31571
2006-11-08 20:34:28 +00:00
Jeff Cohen
e1003da1a2 Unbreak VC++ build.
llvm-svn: 31464
2006-11-05 19:31:28 +00:00
Chris Lattner
24e8fdc1f6 silence warning
llvm-svn: 31393
2006-11-03 01:13:15 +00:00
Evan Cheng
c73547a71d SelectScalarSSELoad should call CanBeFoldedBy as well.
llvm-svn: 30973
2006-10-16 06:34:55 +00:00
Evan Cheng
6c8de88f88 Corrected load folding check. We need to start from the root of the sub-dag
being matched and ensure there isn't a non-direct path to the load (i.e. a
path that goes out of the sub-dag.)

llvm-svn: 30958
2006-10-14 08:33:25 +00:00
Evan Cheng
fe5bb5dbe6 Merge ISD::TRUNCSTORE to ISD::STORE. Switch to using StoreSDNode.
llvm-svn: 30945
2006-10-13 21:14:26 +00:00