mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 11:42:57 +01:00
Commit Graph

79361 Commits

Author SHA1 Message Date
Craig Topper
c9756440ea Instruction selection priority fixes to remove the XMM/XMMInt/orAVX predicates. Another commit will remove orAVX functions from X86Subtarget.
llvm-svn: 147841
2012-01-10 06:30:56 +00:00
Evan Cheng
7855c5d08f Allow machine-cse to look across MBB boundary when cse'ing instructions that
define physical registers. It's currently very restrictive, only catching
cases where the CSE candidate is in an immediate (and only) predecessor. But it catches
a surprisingly large number of cases.

rdar://10660865

llvm-svn: 147827
2012-01-10 02:02:58 +00:00
Andrew Trick
db66631fb3 Enable LSR IV Chains with sufficient heuristics.
These heuristics are sufficient for enabling IV chains by
default. Performance analysis has been done for i386, x86_64, and
thumbv7. The optimization is rarely important, but can significantly
speed up certain cases by eliminating spill code within the
loop. Unrolled loops are prime candidates for IV chains. In many
cases, the final code could still be improved with more target
specific optimization following LSR. The goal of this feature is for
LSR to make the best choice of induction variables.

Instruction selection may not completely take advantage of this
feature yet. As a result, there could be cases of slight code size
increase.

Code size can be worse on x86 because it doesn't support postincrement
addressing. In fact, when chains are formed, you may see redundant
address plus stride addition in the addressing mode. GenerateIVChains
tries to compensate for the common cases.

On ARM, code size increase can be mitigated by using postincrement
addressing, but downstream codegen currently misses some opportunities.

llvm-svn: 147826
2012-01-10 01:45:08 +00:00
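
A minimal sketch (hypothetical example, not from the commit) of the kind of manually unrolled loop IV chains target: without chaining, LSR may choose an independent base-plus-scaled-index expression for every access, generating spill code; with a chain, each address is formed from the previous one.

  // Assumes n is a multiple of 4; the four accesses per iteration form one chain.
  void saxpy4(float *x, const float *y, float a, int n) {
    for (int i = 0; i < n; i += 4) {
      x[i + 0] += a * y[i + 0];   // chain head: &x[i], &y[i]
      x[i + 1] += a * y[i + 1];   // chained: previous address + one element
      x[i + 2] += a * y[i + 2];
      x[i + 3] += a * y[i + 3];
    }
  }
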
Jakob Stoklund Olesen
f9de299af4 Accurately model hardware alignment rounding.
On Thumb, the displacement computation hardware uses the address of the
current instruction rounded down to a multiple of 4.  Include this
rounding in the UserOffset we compute for each instruction.

When inline asm is present, the instruction alignment may not be known.
Constrain the maximum displacement instead in that case.

This makes it possible for CreateNewWater() and OffsetIsInRange() to
agree about the valid displacements.  When they disagree, infinite
looping happens.

As always, test cases for this stuff are insane.

<rdar://problem/10660175>

llvm-svn: 147825
2012-01-10 01:34:59 +00:00
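
A small illustration of the rounding described above (the helper name is hypothetical, not the pass's actual code):

  // On Thumb, the base used for displacement computation is the instruction
  // address rounded down to a multiple of 4; clearing the low two bits models that.
  static unsigned roundedUserOffset(unsigned InstrAddr) {
    return InstrAddr & ~3u;
  }
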
Rafael Espindola
6b018e0b1d Remove the logging streamer.
llvm-svn: 147820
2012-01-10 00:40:39 +00:00
Jakob Stoklund Olesen
eb6540fdbf Catch runaway ARMConstantIslandPass even in -Asserts builds.
The pass is prone to looping, and it is better to crash than loop
forever, even in a -Asserts build.

<rdar://problem/10660175>

llvm-svn: 147806
2012-01-09 22:16:24 +00:00
Devang Patel
88a31798bc Fix asm string wrt variants.
llvm-svn: 147805
2012-01-09 21:32:02 +00:00
Devang Patel
77ab1ea721 Use descriptive variable name and remove incorrect operand number check.
llvm-svn: 147802
2012-01-09 21:30:46 +00:00
Andrew Trick
09d73ea35b Adding IV chain generation to LSR.
After collecting chains, check if any should be materialized. If so,
hide the chained IV users from the LSR solver. LSR will only solve for
the head of the chain. GenerateIVChains will then materialize the
chained IV users by computing the IV relative to its previous value in
the chain.

In theory, chained IV users could be exposed to LSR's solver. This
would be considerably complicated to implement and I'm not aware of a
case where we need it. In practice it's more important to
intelligently prune the search space of nontrivial loops before
running the solver; otherwise the solver is often forced to prune the
best solutions. Hiding the chained users does this well, so
that LSR is more likely to find the best IV for the chain as a whole.

llvm-svn: 147801
2012-01-09 21:18:52 +00:00
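
A hand-written sketch (hypothetical, not GenerateIVChains output) of what materializing chained users relative to the previous value looks like: only the chain head is an independent induction variable, and the remaining users are small offsets from it.

  void scaleTriples(float *p, float a, int n) {
    for (float *q = p, *end = p + 3 * n; q != end; q += 3) {
      q[0] *= a;        // chain head: q
      q[1] *= a;        // previous value + one element
      q[2] *= a;        // previous value + one more element
    }
  }
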
Andrew Trick
b6ee006eaf Adding collection of IV chains to LSR.
This collects a set of IV uses within the loop whose values can be
computed relative to each other in a sequence. Following checkins will
make use of this information.

llvm-svn: 147797
2012-01-09 19:50:34 +00:00
Devang Patel
921a16318d Split AsmParser into two components - AsmParser and AsmParserVariant
AsmParser holds info specific to target parser.
AsmParserVariant holds info specific to asm variants supported by the target.

llvm-svn: 147787
2012-01-09 19:13:28 +00:00
Andrew Trick
b442611358 "Minor LSR debugging stuff"
llvm-svn: 147785
2012-01-09 18:58:16 +00:00
Devang Patel
bcde69f19c Update language check. Do not ignore DW_LANG_Python.
Patch by Joe Groff!

llvm-svn: 147781
2012-01-09 17:49:47 +00:00
Benjamin Kramer
48d318717f Move assert to the right place.
llvm-svn: 147779
2012-01-09 17:36:29 +00:00
Benjamin Kramer
f9cefbfed0 InstCombine: Teach foldLogOpOfMaskedICmpsHelper that sign bit tests are bit tests.
This subsumes several other transforms while enabling us to catch more cases.

llvm-svn: 147777
2012-01-09 17:23:27 +00:00
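
The underlying identity (an illustrative check, not the InstCombine code): a signed sign test is just a bit test against the sign-bit mask, so it can feed the same masked-icmp folding.

  #include <cstdint>
  // Always returns true for two's complement int32_t.
  bool signTestIsBitTest(int32_t x) {
    return (x < 0) == ((uint32_t(x) & 0x80000000u) != 0);
  }
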
Chandler Carruth
f762ca322b Don't rely on the fact that shift values are never very large, and thus
this subtraction will result in small negative numbers at worst, which
become very large positive numbers on assignment and are thus caught by
the <=4 check on the next line. The >0 check was clearly intended to catch
these as negative numbers.

Spotted by inspection, and impossible to trigger given the shift widths
that can be used.

llvm-svn: 147773
2012-01-09 09:47:25 +00:00
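
An illustration (hypothetical values and names) of the wrap-around the old code relied on: a small negative difference assigned to an unsigned variable becomes a huge positive value and is rejected by the upper bound rather than by the >0 check.

  #include <cstdint>
  bool shiftsAreClose(uint32_t a, uint32_t b) {
    uint32_t diff = a - b;           // e.g. 2 - 5 wraps to 0xFFFFFFFD
    return diff > 0 && diff <= 4;    // the wrapped value fails the <= 4 bound
  }
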
Chandler Carruth
9a418a2713 Cleanup and FileCheck-ize a test.
llvm-svn: 147772
2012-01-09 09:44:26 +00:00
Craig Topper
be3b3744b8 Remove AVX hack in X86Subtarget. AVX/AVX2 are now treated as an SSE level. Predicate functions have been altered to maintain previous names and behavior.
llvm-svn: 147770
2012-01-09 09:02:13 +00:00
Craig Topper
915bc4edaa Add HasAVX predicate to some of the AVX patterns.
llvm-svn: 147769
2012-01-09 08:34:00 +00:00
Craig Topper
2710d2dadb Reorder a bunch of patterns to put the AVX version first thus giving it priority over the SSE version. Another step towards trying to remove the AVX hack that disables SSE from X86Subtarget.
llvm-svn: 147768
2012-01-09 08:10:38 +00:00
Craig Topper
ee2dabebe3 Clean up patterns for MOVNT*. Not sure why there were floating point types on MOVNTPS and MOVNTDQ. And v4i64 was completely missing.
llvm-svn: 147767
2012-01-09 06:52:46 +00:00
Craig Topper
8ce58f4687 Mark MOVNTI as being supported in SSE2 OR AVX mode. This instruction has no AVX equivalent so we should use the SSE version.
llvm-svn: 147766
2012-01-09 06:38:55 +00:00
Craig Topper
bd6f3004ba Move SSE2 logical operations PAND/POR/PXOR/PANDN above SSE1 logical operations ANDPS/ORPS/XORPS/ANDNPS. This fixes a pattern ordering issue that meant that the SSE2 instructions could never be directly selected since the SSE1 patterns would always match first. This is largely moot with the ExeDepsFix pass, but I'm trying to audit for all such ordering issues.
llvm-svn: 147765
2012-01-09 05:07:01 +00:00
Craig Topper
dc81848e5b Change some places that were checking for AVX OR SSE1/2 to use hasXMM/hasXMMInt instead. Also fix one place that checked SSE3, but accidentally excluded AVX, to use hasSSE3orAVX. This is a step towards removing the AVX hack from X86Subtarget.h
llvm-svn: 147764
2012-01-09 02:28:15 +00:00
Rafael Espindola
7618aa1c64 Don't print an unused label before .cfi_endproc.
llvm-svn: 147763
2012-01-09 00:17:29 +00:00
Craig Topper
1fdcf7071d Don't disable MMX support when AVX is enabled. Fix predicates for MMX instructions that were added along with SSE instructions to check for AVX in addition to SSE level.
llvm-svn: 147762
2012-01-09 00:11:29 +00:00
Craig Topper
3fb8201fcb Enable FISTTP* instructions when AVX is enabled.
llvm-svn: 147758
2012-01-08 23:04:21 +00:00
Benjamin Kramer
e1321329f4 Tweak my last commit to be less conservative about uses.
We still save an instruction when just the "and" part is replaced.
Also change the code to match comments more closely.

llvm-svn: 147753
2012-01-08 21:12:51 +00:00
Evan Cheng
6a08916ce2 Don't forget to transfer implicit uses of return instruction.
llvm-svn: 147752
2012-01-08 20:41:16 +00:00
Evan Cheng
e958ca7b2e Avoid erasing copies from a reserved register unless the definition can be
safely proven not to have been clobbered. No small test case possible.

llvm-svn: 147751
2012-01-08 19:52:28 +00:00
Benjamin Kramer
e94856c8c4 InstCombine: If we have a bit test and a sign test anded/ored together, merge the sign bit into the bit test.
This is common in bit field code, e.g. checking if the first or the last bit of a bit field is set.

llvm-svn: 147749
2012-01-08 18:32:24 +00:00
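
A hypothetical source-level pattern this transform targets, matching the bit-field case mentioned above: a bit test and a sign test or'd together fold into a single masked test.

  #include <cstdint>
  bool firstOrLastBitSet(int32_t x) {
    return (x & 1) != 0 || x < 0;     // after the fold: (x & 0x80000001) != 0
  }
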
Victor Umansky
5d24f5f51a Reverted commit #147601 upon Evan's request.
llvm-svn: 147748
2012-01-08 17:20:33 +00:00
Rafael Espindola
b97772f85c Remove MCELFStreamer.h.
llvm-svn: 147745
2012-01-07 23:18:39 +00:00
Rafael Espindola
19a13321f8 Don't print a label before .cfi_startproc when we don't need to. This makes
the produced assembly when using CFI just a bit more readable.

llvm-svn: 147743
2012-01-07 22:42:19 +00:00
Benjamin Kramer
4c48b2024d Make clever use of alignment and padding to shrink GlobalValue.
-8 bytes on x86_64, no change on x86.

llvm-svn: 147742
2012-01-07 21:17:16 +00:00
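
A generic illustration of the trick (not GlobalValue's actual layout): grouping small fields so they share the padding that follows pointer-aligned members reclaims a full slot on 64-bit targets.

  #include <cstdint>
  struct Before {
    void *Parent;
    uint8_t Kind;     // followed by 7 bytes of padding on x86_64
    void *Ptr;
    uint8_t Flags;    // followed by 7 bytes of padding on x86_64
  };
  struct After {
    void *Parent;
    void *Ptr;
    uint8_t Kind;     // Kind and Flags now share one padded tail slot
    uint8_t Flags;
  };
  static_assert(sizeof(After) <= sizeof(Before), "reordering never grows the struct");
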
Jakob Stoklund Olesen
8db266fafb Match SelectionDAG logic for enabling movt.
Darwin doesn't do static, and ELF targets only support static.

llvm-svn: 147740
2012-01-07 20:49:15 +00:00
Craig Topper
260a034757 Fix typo in the X86 backend readme. Patch from Jaeden Amero.
llvm-svn: 147739
2012-01-07 20:35:21 +00:00
Benjamin Kramer
0ce9fd3032 Remove VectorExtras. This unused helper was written for a type of API that is discouraged now.
llvm-svn: 147738
2012-01-07 19:42:13 +00:00
Craig Topper
17901c2d33 Remove unnecessary check of hasAVX(). It's already included in hasXMM().
llvm-svn: 147734
2012-01-07 18:48:43 +00:00
Craig Topper
66a349f55d Replace some uses of hasNUsesOfValue(0, X) with !hasAnyUseOfValue(X)
llvm-svn: 147733
2012-01-07 18:31:09 +00:00
Benjamin Kramer
b13cdd4879 Port the trick to skip the check for empty buckets from StringMap to DenseMap.
This should fix the odd behavior that find() is slower than lookup().

llvm-svn: 147731
2012-01-07 13:12:07 +00:00
Craig Topper
8648954580 Add some DAG combines for SUBC/SUBE. If nothing uses the carry/borrow out of subc, turn it into a sub. Turn (subc x, x) into 0 with no borrow. Turn (subc x, 0) into x with no borrow. Turn (subc -1, x) into (xor x, -1) with no borrow. Turn sube with no borrow in into subc.
llvm-svn: 147728
2012-01-07 09:06:39 +00:00
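
The identities behind these combines, checked in plain C++ as a sanity sketch (illustrative, not the DAG combiner code); the borrow-out is unused in each case, matching the commit.

  #include <cassert>
  #include <cstdint>
  void checkSubcIdentities(uint32_t x) {
    assert(x - x == 0u);                              // (subc x, x)  -> 0
    assert(x - 0u == x);                              // (subc x, 0)  -> x
    assert(uint32_t(-1) - x == (x ^ uint32_t(-1)));   // (subc -1, x) -> (xor x, -1)
  }
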
Cameron Zwarich
49330adc9c Fix TableGen so that it will emit the correct signature for FastEmit_f:
  /// FastEmit_f - This method is called by target-independent code
  /// to request that an instruction with the given type, opcode, and
  /// floating-point immediate operand be emitted.
  virtual unsigned FastEmit_f(MVT VT,
                              MVT RetVT,
                              unsigned Opcode,
                              const ConstantFP *FPImm);

Currently, it emits an accidentally overloaded version without the const on the
ConstantFP*. This doesn't affect anything in the tree, since nothing causes that
method to be autogenerated, but I have been playing with some ARM TableGen
refactorings that hit this problem.

llvm-svn: 147727
2012-01-07 08:18:37 +00:00
Jakob Stoklund Olesen
b6f6f2baa9 Optimize reserved register coalescing.
Reserved registers don't have proper live ranges, their LiveInterval
simply has a snippet of liveness for each def.  Virtual registers with a
single value that is a copy of a reserved register (typically %esp) can
be coalesced with the reserved register if the live range doesn't
overlap any reserved register defs.

When coalescing with a reserved register, don't modify the reserved
register live range.  Just leave it as a bunch of dead defs.  This
eliminates quadratic coalescer behavior in i386 functions with many
function calls.

PR11699

llvm-svn: 147726
2012-01-07 07:39:50 +00:00
Jakob Stoklund Olesen
3221d31706 Use the 'regalloc' debug tag for most register allocator tracing.
llvm-svn: 147725
2012-01-07 07:39:47 +00:00
Andrew Trick
60dbff489b Enable redundant phi elimination after LSR.
This will be more important as we extend the LSR pass in ways that don't rely on the formula solver. In particular, we need it for constructing IV chains.

llvm-svn: 147724
2012-01-07 07:08:17 +00:00
Eli Bendersky
af558b888a Fix dead link
llvm-svn: 147721
2012-01-07 04:11:27 +00:00
Jakob Stoklund Olesen
71f92061aa Use getRegForValue() to materialize the address of ARM globals.
This enables basic local CSE, giving us 20% smaller code for
consumer-typeset in -O0 builds.

<rdar://problem/10658692>

llvm-svn: 147720
2012-01-07 04:07:22 +00:00
Evan Cheng
3c2cf59a22 Revert part of r147716. Looks like x87 instructions' kill markers are all messed
up, so the branch folding pass can't use the scavenger. :-(  This doesn't break
anything currently. It just means targets which do not carefully update kill
markers cannot run the post-ra scheduler (not new; it has always been the case).

We should fix this at some point since it's really hacky.

llvm-svn: 147719
2012-01-07 03:35:48 +00:00
Andrew Trick
d9eb9c8780 LSR: Don't optimize loops if an outer loop has no preheader.
LoopSimplify may not run on some outer loops, e.g. because of indirect
branches. SCEVExpander simply cannot handle outer loops with no preheaders.
Fixes rdar://10655343 SCEVExpander segfault.

llvm-svn: 147718
2012-01-07 03:16:50 +00:00