mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 12:33:33 +02:00
Commit Graph

843 Commits

Author SHA1 Message Date
Bill Wendling
c9e856fbfd Add support for non-zero __builtin_return_address values on X86.
llvm-svn: 62338
2009-01-16 19:25:27 +00:00
Mon P Wang
e235ba591f Added missing support to widen an operand from a bit convert.
llvm-svn: 62285
2009-01-15 22:43:38 +00:00
Mon P Wang
4cfe965df2 Expand insert/extract of a <4 x i32> with a variable index.
llvm-svn: 62281
2009-01-15 21:10:20 +00:00
Rafael Espindola
0aba6c9435 Add the private linkage.
llvm-svn: 62279
2009-01-15 20:18:42 +00:00
Dan Gohman
8c835f6285 Disable the register+memory forms of the bt instructions for now. Thanks
to Eli for pointing out that these forms don't ignore the high bits of
their index operands, and as such are not immediately suitable for use
by isel.

llvm-svn: 62194
2009-01-13 23:23:30 +00:00
Duncan Sands
975f2428ba When replacing uses and the same node is reached
via two paths, process it once not twice, d'oh!
Analysis, testcase and original patch thanks to
Mon Ping Wang.

llvm-svn: 62169
2009-01-13 15:17:14 +00:00
Evan Cheng
a706a020bc Fix llvm-gcc bootstrap on x86_64 Linux. If a virtual register is copied to a physical register, it's not necessarily defined by a copy, so we have to make sure it doesn't clobber any sub-register that might be live during its live interval. If the live interval crosses a basic block boundary, it's not safe to use the less conservative check (scanning uses and defs), because a sub-register might be live out of the block.
llvm-svn: 62144
2009-01-13 03:57:45 +00:00
Devang Patel
6d7fd4b913 Use DebugInfo interface to lower dbg_* intrinsics.
llvm-svn: 62126
2009-01-13 00:32:17 +00:00
Evan Cheng
5e17ea36e1 Fix PR3241: Currently EmitCopyFromReg emits a copy from the physical register to a virtual register unless it requires an expensive cross-class copy. That means we are only treating an "expensive to copy" register dependency as a physical register dependency.
Also future-proof the scheduler to handle "normal" physical register dependencies. The code is not exercised yet.

llvm-svn: 62074
2009-01-12 03:19:55 +00:00
Evan Cheng
47dfb8c719 This is a dup of pr2659.ll.
llvm-svn: 62029
2009-01-10 19:06:32 +00:00
Evan Cheng
411c48b7d2 A duplicated node may produce a non-physical-register def.
llvm-svn: 62015
2009-01-09 22:44:02 +00:00
Evan Cheng
84945aba0b Add test case from PR2659.
llvm-svn: 62006
2009-01-09 21:01:31 +00:00
Dan Gohman
a707c0dafe PR2659 was fixed by r61847. Add the testcase as a regression test.
llvm-svn: 61986
2009-01-09 08:16:12 +00:00
Evan Cheng
a70ecc2f51 The coalescer does not coalesce a virtual register to a physical register if any of the physical register's sub-register live intervals overlaps with the virtual register. This is overly conservative. It prevents an extract_subreg from being coalesced away:
v1024 = EDI  // not killed
      =
      = EDI

One possible solution is for the coalescer to examine the sub-register live intervals in the same manner as the physical register. Another possibility is to examine defs and uses (when needed) of sub-registers. Both solutions are too expensive. For now, look for "short virtual intervals" and scan instructions to look for conflicts instead.

This is a small win on x86-64; e.g., it shaves ~80 instructions off 403.gcc.

llvm-svn: 61847
2009-01-07 02:08:57 +00:00
Chris Lattner
f6de7aa2c9 add a testcase.
llvm-svn: 61845
2009-01-07 01:48:08 +00:00
Dan Gohman
ca4475dd7b Add patterns to match conditional moves with loads folded
into their left operand, rather than their right. Do this
by commuting the operands and inverting the condition.

llvm-svn: 61842
2009-01-07 01:00:24 +00:00
Dan Gohman
2682e8745c X86_COND_C and X86_COND_NC are alternate mnemonics for
X86_COND_B and X86_COND_AE, respectively.

llvm-svn: 61835
2009-01-07 00:15:08 +00:00
Dan Gohman
4edc9d725b Now that fold-pcmpeqd-0.ll is effectively testing that scheduling helps
avoid the need for spilling, add a new testcase that tests that the
pcmpeqd used for V_SETALLONES is changed to a constant-pool load as
needed.

llvm-svn: 61831
2009-01-06 23:48:10 +00:00
Dan Gohman
e033f7c41e Revert r42653 and forward-port the code that lets INC64_32r be
converted to LEA64_32r in x86's convertToThreeAddress. This
replaces code like this:
   movl  %esi, %edi
   inc   %edi
with this:
   lea   1(%rsi), %edi
which appears to be beneficial.

llvm-svn: 61830
2009-01-06 23:34:46 +00:00
Dan Gohman
1cdb677fc8 Use a latency value of 0 for the artificial edges inserted by
AddPseudoTwoAddrDeps. This lets the scheduling infrastructure
avoid recalculating node heights. In very large testcases this
was a major bottleneck. Thanks to Roman Levenstein for finding
this!

As a side effect, fold-pcmpeqd-0.ll is now scheduled better
and it no longer requires spilling on x86-32.

llvm-svn: 61778
2009-01-06 01:19:04 +00:00
Evan Cheng
36e238a4d3 Find loop back edges only after empty blocks are eliminated.
llvm-svn: 61752
2009-01-05 21:17:27 +00:00
Dan Gohman
2a079de3f5 Fix a DAGCombiner abort on an invalid shift count constant. This fixes PR3250.
llvm-svn: 61613
2009-01-03 19:22:06 +00:00
Evan Cheng
c52f942d67 Do not isel load-folding bt instructions for Pentium M, Core, Core 2, and AMD processors. These are significantly slower than a load followed by a bt of a register.
llvm-svn: 61557
2009-01-02 05:35:45 +00:00
Evan Cheng
57115c1887 Use movaps / movd to extract vector element 0 even with SSE4.1. It's still cheaper than pextrw, especially if the value is in memory.
llvm-svn: 61555
2009-01-02 05:29:08 +00:00
Chris Lattner
cd245cc5c3 add PR #
llvm-svn: 61427
2008-12-25 05:40:38 +00:00
Chris Lattner
fde038935b Add a simple pattern for matching 'bt'.
llvm-svn: 61426
2008-12-25 05:34:37 +00:00
Dan Gohman
7ff343fe6c Fix a compiler-abort on a testcase where the stack-pointer is added to
a symbolic constant. This is unlikely to be intentional, but it
shouldn't crash the compiler.

llvm-svn: 61408
2008-12-24 00:27:51 +00:00
Dale Johannesen
e1a3d2da49 Add another permutation where we should get rid of a-a.
llvm-svn: 61401
2008-12-23 23:01:27 +00:00
Mon P Wang
993de36832 Added shuffle and splat test cases for r61365.
llvm-svn: 61366
2008-12-23 04:05:08 +00:00
Dale Johannesen
425b44516f One more permutation of subtracting off a base value.
llvm-svn: 61361
2008-12-23 01:59:54 +00:00
Dan Gohman
c9f244842c Fix fast-isel to not emit invalid assembly when presented with a
constant shift count that doesn't fit in the shift instruction's
immediate field. This fixes PR3242.

llvm-svn: 61281
2008-12-20 17:19:40 +00:00
Dan Gohman
8c5bea15ca Use the correct Preds and Succs lists in setHeightDirty()
and setDepthDirty(), respectively. This fixes PR3241.

llvm-svn: 61276
2008-12-20 16:34:57 +00:00
Evan Cheng
da55c4ffb7 Fix PR3149. If an early clobber def is a physical register and it is tied to an input operand, it effectively extends the live range of the physical register. Currently we do not have a good way to represent this.
172     %ECX<def> = MOV32rr %reg1039<kill>
180     INLINEASM <es:subl $5,$1
        sbbl $3,$0>, 10, %EAX<def>, 14, %ECX<earlyclobber,def>, 9, %EAX<kill>,
36, <fi#0>, 1, %reg0, 0, 9, %ECX<kill>, 36, <fi#1>, 1, %reg0, 0
188     %EAX<def> = MOV32rr %EAX<kill>
196     %ECX<def> = MOV32rr %ECX<kill>
204     %ECX<def> = MOV32rr %ECX<kill>
212     %EAX<def> = MOV32rr %EAX<kill>
220     %EAX<def> = MOV32rr %EAX
228     %reg1039<def> = MOV32rr %ECX<kill>

The early clobber operand ties ECX input to the ECX def.

The live interval of ECX is represented as this:
%reg20,inf = [46,47:1)[174,230:0)  0@174-(230) 1@46-(47)

The right way to represent this is something like
%reg20,inf = [46,47:2)[174,182:1)[181:230:0)  0@174-(182) 1@181-230 @2@46-(47)

Of course that won't work since that means overlapping live ranges defined by two val#.

The workaround for now is to add a bit to val# which says the val# is redefined by an early clobber def somewhere. This prevents the move at 228 from being optimized away by SimpleRegisterCoalescing::AdjustCopiesBackFrom.

llvm-svn: 61259
2008-12-19 20:58:01 +00:00
Evan Cheng
17b53ef5b0 - CodeGenPrepare does not split loop back edges, but it previously only knew about back edges of single-block loops. It now does a DFS walk to find loop back edges.
- Use SplitBlockPredecessors to factor out common predecessors of the critical edge destination. This is disabled for now due to some regressions.

llvm-svn: 61248
2008-12-19 18:03:11 +00:00
Rafael Espindola
7593f0004f Fix bug 3202.
The EH_frame and .eh symbols are now private, except for darwin9 and earlier.
The patch also fixes the definition of PrivateGlobalPrefix on PPC Linux.

llvm-svn: 61242
2008-12-19 10:55:56 +00:00
Mon P Wang
4e9022582d Fix test to account for generating some vector code for mul v2i64 instead
of incorrectly generating pmuldq

llvm-svn: 61228
2008-12-18 23:42:37 +00:00
Mon P Wang
5b74ab1f1e Added some basic test cases for r61209
llvm-svn: 61210
2008-12-18 20:05:58 +00:00
Eli Friedman
4aae828bf8 Fix for PR3225: disable a broken optimization in
DAGTypeLegalizer::ExpandShiftWithKnownAmountBit.

In terms of restoring the optimization, the best fix here isn't 
obvious... any ideas?

llvm-svn: 61119
2008-12-17 03:35:17 +00:00
Dale Johannesen
e348900657 A new dag combine; several permutations of this
already exist under ADD, but this one was missing.

llvm-svn: 61107
2008-12-16 22:13:49 +00:00
Dan Gohman
10eb3ccaeb Enable anti-dependence breaking by default when post-RA scheduling is enabled.
llvm-svn: 61078
2008-12-16 06:21:45 +00:00
Dan Gohman
40a40dd7c1 Fix some register-alias-related bugs in the post-RA scheduler liveness
computation code. Also, avoid adding output-dependency edges when both
defs are dead, which frequently happens with EFLAGS defs.

Compute Depth and Height lazily, and always in terms of edge latency
values. For the schedulers that don't care about latency, edge latencies
are set to 1.

Eliminate Cycle and CycleBound, and LatencyPriorityQueue's Latencies array.
These are all subsumed by the Depth and Height fields.

llvm-svn: 61073
2008-12-16 03:25:46 +00:00
Mon P Wang
bb3c2994f0 Added support for splitting and scalarizing vector shifts.
llvm-svn: 61050
2008-12-15 21:44:00 +00:00
Mon P Wang
2f96113348 Added support to LegalizeType for expanding the operands of scalar to vector
and insert vector element.  Modified extract vector element to extend the
result to match the expected promoted type.

llvm-svn: 61029
2008-12-15 06:57:02 +00:00
Bill Wendling
13e4a3d0b0 - Use patterns instead of creating completely new instruction matching patterns,
which are identical to the original patterns.

- Change the multiply with overflow so that we distinguish between signed and
  unsigned multiplication. Currently, unsigned multiplication with overflow
  isn't working!

llvm-svn: 60963
2008-12-12 21:15:41 +00:00
Bill Wendling
292263313b If ADD, SUB, or MUL have an overflow bit that's used, don't do the transformation on
them. The DAG combiner expects that nodes that are transformed have one value
result.

llvm-svn: 60857
2008-12-10 22:36:00 +00:00
Mon P Wang
308879dcfc Fixed a bug when trying to optimize an extract vector element of a
bit convert that changes the number of elements of a shuffle.

llvm-svn: 60829
2008-12-10 03:59:02 +00:00
Bill Wendling
1c1dacdd42 Implement fast-isel conversion of a branch instruction that's branching on an
overflow/carry from the "arithmetic with overflow" intrinsics. It searches the
machine basic block from bottom to top to find the SETO/SETC instruction that is
its conditional. If an instruction modifies EFLAGS before it reaches the
SETO/SETC instruction, then it defaults to the normal instruction emission.

llvm-svn: 60807
2008-12-09 23:19:12 +00:00
Bill Wendling
4c8fb3a0cc Add sub/mul overflow intrinsics. This currently doesn't have a
target-independent way of determining overflow on multiplication. It's very
tricky. Patch by Zoltan Varga!

llvm-svn: 60800
2008-12-09 22:08:41 +00:00
Duncan Sands
88a2901801 Fix PR3117: not all nodes being legalized. The
essential problem was that the DAG can contain
random unused nodes which were never analyzed.
When remapping a value of a node being processed,
such a node may become used and need to be analyzed;
however due to operands being transformed during
analysis the node may morph into a different one.
Users of the morphing node need to be updated, and
this wasn't happening.  While there I added a bunch
of documentation and sanity checks, so I (or some
other poor soul) won't have to scratch their head
over this stuff for so long trying to remember how it
was all supposed to work the next time some obscure
problem pops up!  The extra sanity checking exposed
a few places where invariants weren't being preserved,
so those are fixed too.  Since some of the sanity
checking is expensive, I added a flag to turn it
on.  It is also turned on when building with
ENABLE_EXPENSIVE_CHECKS=1.

llvm-svn: 60797
2008-12-09 21:33:20 +00:00
Mon P Wang
0c011f8ba9 Fix getNode to allow a vector for the shift amount for shifts of vectors.
Fix the shift amount when unrolling a vector shift into scalar shifts.
Fix a problem in getShuffleScalarElt where it assumes that the input of
a bit convert must be a vector.

llvm-svn: 60740
2008-12-09 05:46:39 +00:00