Commit Graph

20395 Commits

Rafael Espindola
c326212da6 Add big-endian Mips support. Based on a patch by Jack Carter.
llvm-svn: 147924
2012-01-11 04:04:14 +00:00
Rafael Espindola
fff89417f5 Add the skeleton of an asm parser for mips.
llvm-svn: 147923
2012-01-11 03:56:41 +00:00
Andrew Trick
393dc735f6 ARM Ld/St Optimizer fix.
Allow LDRD to be formed from pairs with different LDR encodings. This was the original intention of the pass. Somewhere along the way, the LDR opcodes were refined which broke the optimization. We really don't care what the original opcodes are as long as they both map to the same LDRD and the immediate still fits.

Fixes rdar://10435045 ARMLoadStoreOptimization cannot handle mixed LDRi8/LDRi12

llvm-svn: 147922
2012-01-11 03:56:08 +00:00
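
A hedged sketch of the relaxed pairing rule described above (helper names are hypothetical, not the pass's actual code): what matters is that both LDR encodings map to the same LDRD and that the offset is still encodable.

    // Hypothetical helpers; this illustrates the rule, not the actual pass code.
    bool canFormLDRD(unsigned FirstOpc, unsigned SecondOpc, int Offset) {
      unsigned Pair1 = ldrdOpcodeFor(FirstOpc);   // e.g. LDRi8  -> LDRD
      unsigned Pair2 = ldrdOpcodeFor(SecondOpc);  // e.g. LDRi12 -> LDRD
      // Mixed LDRi8/LDRi12 pairs are fine as long as both map to the same
      // LDRD and the combined immediate still fits its encoding.
      return Pair1 != 0 && Pair1 == Pair2 && isLegalLDRDImm(Offset);
    }
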
Lang Hames
11ff139f8f Fixed order of operands in comment to match code.
llvm-svn: 147890
2012-01-10 22:53:20 +00:00
Joerg Sonnenberger
fccad68f65 Default stack alignment for 32-bit x86 should be 4 bytes, not 8 bytes.
Add a test that checks the stack alignment of a simple function for
Darwin, Linux, and NetBSD in 32-bit and 64-bit mode.

llvm-svn: 147888
2012-01-10 22:43:53 +00:00
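
An illustration of the kind of function such a test would compile (illustrative only; the committed test checks the emitted prologue, and on 32-bit Linux and NetBSD the ABI only guarantees 4-byte stack alignment at function entry):

    // Illustrative only: a trivial call chain is enough to make the
    // prologue's stack-alignment assumptions visible in the emitted code.
    int callee(int x) { return x + 1; }
    int caller() { return callee(42); }
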
Jakob Stoklund Olesen
7f7f8a2e77 Consider unknown alignment caused by OptimizeThumb2Instructions().
This function runs after all constant islands have been placed, and may
shrink some instructions to their 2-byte forms.  This can actually cause
some constant pool entries to move out of range because of growing
alignment padding.

Treat instructions that may be shrunk the same as inline asm - they
erode the known alignment bits.

Also reinstate an old assertion in verify(). It is correct now that
basic block offsets include alignments.

Add a single large test case that will hopefully exercise many parts of
the constant island pass.

<rdar://problem/10670199>

llvm-svn: 147885
2012-01-10 22:32:14 +00:00
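
In other words, a shrinkable instruction erodes alignment knowledge exactly the way inline asm does; a minimal sketch of that rule, assuming LLVM's MachineInstr API and a hypothetical mayShrinkTo16Bit() query:

    // Sketch only: once an instruction may later shrink to a 16-bit form,
    // only the minimal Thumb alignment (2 bytes, log2 == 1) can be assumed.
    unsigned knownAlignBitsAfter(const MachineInstr &MI, unsigned KnownLog2) {
      if (MI.isInlineAsm() || mayShrinkTo16Bit(MI)) // hypothetical query
        return 1;
      return KnownLog2;
    }
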
Chad Rosier
7bab07a5f1 Add missing VEX predicates to VMOVSDto64rr/VMOVSDto64mr. This fixes a few
failing test cases on our internal AVX nightly tester.
rdar://10663637

llvm-svn: 147881
2012-01-10 22:14:06 +00:00
Jim Grosbach
59537e1ce3 ARM: fix VST2 pseudo-lowering of fixed vs. register update forms.
rdar://10663487

llvm-svn: 147876
2012-01-10 21:11:12 +00:00
Benjamin Kramer
94637ab6c6 Fix some leftover 'control reaches end of non-void function' warnings.
llvm-svn: 147874
2012-01-10 20:47:20 +00:00
Richard Smith
03de404e6f Move default case for covered enum outside of switch.
llvm-svn: 147870
2012-01-10 19:43:09 +00:00
Bill Wendling
4a4be29bb5 For i386, don't use the generic code.
As the comment around 7746 says, it's better to use x87 extended precision
here than SSE, and the generic code doesn't know how to do that. This also
regains the speed lost for the uint64_to_float.c testcase.
<rdar://problem/10669858>

llvm-svn: 147869
2012-01-10 19:41:30 +00:00
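
The precision point is easy to see in isolation: going through a 53-bit double before reaching a 24-bit float can round twice, while the x87's 80-bit format holds all 64 bits of the integer and rounds once. A standalone demonstration (not from the commit):

    #include <cstdint>
    #include <cstdio>

    int main() {
      // 2^63 + 2^39 + 1 rounds up to 2^63 + 2^40 when converted to float
      // directly, but down to 2^63 if it passes through a double first.
      uint64_t x = (1ULL << 63) + (1ULL << 39) + 1;
      float direct = (float)x;           // one correctly rounded conversion
      float twice  = (float)(double)x;   // double rounding loses the +1
      printf("%.0f\n%.0f\n", direct, twice);
      return 0;
    }
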
Richard Smith
9cf5d2b6f5 Fix a -Wreturn-type warning in g++.
llvm-svn: 147867
2012-01-10 19:10:22 +00:00
Chandler Carruth
2a6b59a693 Add 'llvm_unreachable' to pacify GCC's understanding of the constraints
of several newly un-defaulted switches. This also helps optimizers
(including LLVM's) recognize that every case is covered, and we should
assume as much.

llvm-svn: 147861
2012-01-10 18:08:01 +00:00
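
The pattern looks like this (a minimal sketch; llvm_unreachable is LLVM's real macro from Support/ErrorHandling.h):

    #include "llvm/Support/ErrorHandling.h"

    enum Color { Red, Green };

    static int channel(Color C) {
      switch (C) {
      case Red:   return 0;
      case Green: return 1;
      }
      // All cases return above; this both silences GCC's "control reaches
      // end of non-void function" warning and tells the optimizer the
      // switch is exhaustive.
      llvm_unreachable("covered switch fell through");
    }
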
Devang Patel
9a08a580a7 Add definition for intel asm variant.
Right now, this just adds additional entries in the match table. The parser does not use them yet.

llvm-svn: 147859
2012-01-10 17:51:54 +00:00
David Blaikie
8d47bb30e3 Remove unnecessary default cases in switches that cover all enum values.
llvm-svn: 147855
2012-01-10 16:47:17 +00:00
Benjamin Kramer
1f1a76c7af Add definitions for AMD's bobcat (aka btver1)
llvm-svn: 147846
2012-01-10 11:50:02 +00:00
Craig Topper
01eba20904 Fix a crash in AVX2 when trying to broadcast a double into a 128-bit vector. There is no vbroadcastsd xmm, but we do need to support 64-bit integers broadcast into xmm. Also factor the AVX check into the isVectorBroadcast function. This makes more sense since the AVX2 check was already inside.
llvm-svn: 147844
2012-01-10 08:23:59 +00:00
Craig Topper
9beee30168 Remove hasXMM/hasXMMInt functions. Move callers to hasSSE1/hasSSE2. This is the final piece to remove the AVX hack that disabled SSE.
llvm-svn: 147843
2012-01-10 06:54:16 +00:00
Craig Topper
5f6f96da91 Remove hasSSE*orAVX functions and change all callers to use just hasSSE*. AVX is now an SSE level and no longer disables SSE checks.
llvm-svn: 147842
2012-01-10 06:37:29 +00:00
Craig Topper
c9756440ea Instruction selection priority fixes to remove the XMM/XMMInt/orAVX predicates. Another commit will remove orAVX functions from X86SubTarget.
llvm-svn: 147841
2012-01-10 06:30:56 +00:00
Jakob Stoklund Olesen
f9de299af4 Accurately model hardware alignment rounding.
On Thumb, the displacement computation hardware uses the address of the
current instruction rounded down to a multiple of 4.  Include this
rounding in the UserOffset we compute for each instruction.

When inline asm is present, the instruction alignment may not be known.
Constrain the maximum displacement instead in that case.

This makes it possible for CreateNewWater() and OffsetIsInRange() to
agree about the valid displacements.  When they disagree, infinite
looping happens.

As always, test cases for this stuff are insane.

<rdar://problem/10660175>

llvm-svn: 147825
2012-01-10 01:34:59 +00:00
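
A minimal sketch of the base-address rule being modeled (the function name is illustrative, and the Thumb PC bias is left out for clarity):

    // The displacement hardware bases Thumb PC-relative offsets on the
    // current instruction's address rounded down to a multiple of 4.
    unsigned roundedUserOffset(unsigned InstrAddr) {
      return InstrAddr & ~3u;
    }
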
Rafael Espindola
6b018e0b1d Remove the logging streamer.
llvm-svn: 147820
2012-01-10 00:40:39 +00:00
Jakob Stoklund Olesen
eb6540fdbf Catch runaway ARMConstantIslandPass even in -Asserts builds.
The pass is prone to looping, and it is better to crash than loop
forever, even in a -Asserts build.

<rdar://problem/10660175>

llvm-svn: 147806
2012-01-09 22:16:24 +00:00
Devang Patel
88a31798bc Fix asm string wrt variants.
llvm-svn: 147805
2012-01-09 21:32:02 +00:00
Devang Patel
921a16318d Split AsmParser into two components - AsmParser and AsmParserVariant
AsmParser holds info specific to the target parser.
AsmParserVariant holds info specific to asm variants supported by the target.

llvm-svn: 147787
2012-01-09 19:13:28 +00:00
Chandler Carruth
f762ca322b Don't rely on the fact that shift values are never very large and thus that
this subtraction will at worst produce small negative numbers, which become
very large positive numbers on assignment and are thus caught by the <=4
check on the next line. The >0 check was clearly intended to catch these as
negative numbers.

Spotted by inspection, and impossible to trigger given the shift widths
that can be used.

llvm-svn: 147773
2012-01-09 09:47:25 +00:00
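
The underlying trap is ordinary unsigned wraparound; a standalone illustration (not the actual code):

    #include <cstdio>

    int main() {
      unsigned a = 2, b = 5;
      unsigned d = a - b;  // wraps to 4294967293 instead of going to -3
      // d > 0 is true for the wrapped value, so a ">0" guard never fires;
      // only a "<=4" style range check happens to reject it.
      printf("%u %d %d\n", d, d > 0, d <= 4);
      return 0;
    }
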
Craig Topper
be3b3744b8 Remove AVX hack in X86Subtarget. AVX/AVX2 are now treated as an SSE level. Predicate functions have been altered to maintain previous names and behavior.
llvm-svn: 147770
2012-01-09 09:02:13 +00:00
Craig Topper
915bc4edaa Add HasAVX predicate to some of the AVX patterns.
llvm-svn: 147769
2012-01-09 08:34:00 +00:00
Craig Topper
2710d2dadb Reorder a bunch of patterns to put the AVX version first thus giving it priority over the SSE version. Another step towards trying to remove the AVX hack that disables SSE from X86Subtarget.
llvm-svn: 147768
2012-01-09 08:10:38 +00:00
Craig Topper
ee2dabebe3 Clean up patterns for MOVNT*. Not sure why there were floating-point types on MOVNTPS and MOVNTDQ. And v4i64 was completely missing.
llvm-svn: 147767
2012-01-09 06:52:46 +00:00
Craig Topper
8ce58f4687 Mark MOVNTI as being supported in SSE2 OR AVX mode. This instruction has no AVX equivalent so we should use the SSE version.
llvm-svn: 147766
2012-01-09 06:38:55 +00:00
Craig Topper
bd6f3004ba Move SSE2 logical operations PAND/POR/PXOR/PANDN above SSE1 logical operations ANDPS/ORPS/XORPS/ANDNPS. This fixes a pattern ordering issue that meant that the SSE2 instructions could never be directly selected since the SSE1 patterns would always match first. This is largely moot with the ExeDepsFix pass, but I'm trying to audit for all such ordering issues.
llvm-svn: 147765
2012-01-09 05:07:01 +00:00
Craig Topper
dc81848e5b Change some places that were checking for AVX OR SSE1/2 to use hasXMM/hasXMMInt instead. Also fix one place that checked for SSE3 but accidentally excluded AVX, switching it to use hasSSE3orAVX. This is a step towards removing the AVX hack from X86Subtarget.h.
llvm-svn: 147764
2012-01-09 02:28:15 +00:00
Craig Topper
1fdcf7071d Don't disable MMX support when AVX is enabled. Fix predicates for MMX instructions that were added along with SSE instructions to check for AVX in addition to SSE level.
llvm-svn: 147762
2012-01-09 00:11:29 +00:00
Craig Topper
3fb8201fcb Enable FISTTP* instructions when AVX is enabled.
llvm-svn: 147758
2012-01-08 23:04:21 +00:00
Evan Cheng
6a08916ce2 Don't forget to transfer implicit uses of the return instruction.
llvm-svn: 147752
2012-01-08 20:41:16 +00:00
Victor Umansky
5d24f5f51a Reverted commit #147601 upon Evan's request.
llvm-svn: 147748
2012-01-08 17:20:33 +00:00
Jakob Stoklund Olesen
8db266fafb Match SelectionDAG logic for enabling movt.
Darwin doesn't do static, and ELF targets only support static.

llvm-svn: 147740
2012-01-07 20:49:15 +00:00
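
A hedged reading of that condition as a predicate (an assumption about the logic, not the committed code):

    // Assumption, not the committed code: movt-based materialization is
    // allowed only when the relocation model fits the platform.
    // "Darwin doesn't do static, and ELF targets only support static."
    bool allowMovt(bool IsTargetDarwin, bool IsStaticRelocModel) {
      return IsTargetDarwin ? !IsStaticRelocModel : IsStaticRelocModel;
    }
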
Craig Topper
260a034757 Fix typo in the X86 backend readme. Patch from Jaeden Amero.
llvm-svn: 147739
2012-01-07 20:35:21 +00:00
Benjamin Kramer
0ce9fd3032 Remove VectorExtras. This unused helper was written for a type of API that is discouraged now.
llvm-svn: 147738
2012-01-07 19:42:13 +00:00
Craig Topper
17901c2d33 Remove unnecessary check of hasAVX(). It's already included in hasXMM().
llvm-svn: 147734
2012-01-07 18:48:43 +00:00
Jakob Stoklund Olesen
71f92061aa Use getRegForValue() to materialize the address of ARM globals.
This enables basic local CSE, giving us 20% smaller code for
consumer-typeset in -O0 builds.

<rdar://problem/10658692>

llvm-svn: 147720
2012-01-07 04:07:22 +00:00
Rafael Espindola
2d545fa143 Split Finish into Finish and FinishImpl to have a common place to do end-of-file
error checking. Use that to error on an unfinished cfi_startproc.

The error is not nice, but it is already better than a segmentation fault.

llvm-svn: 147717
2012-01-07 03:13:18 +00:00
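
This is the classic non-virtual-interface split; a hedged sketch of the shape (the class and method names come from the commit, while the flag and message are illustrative):

    #include "llvm/Support/ErrorHandling.h"

    // Sketch: Finish() becomes the shared entry point where end-of-file
    // checks can live; subclasses only implement FinishImpl().
    class MCStreamer {
    public:
      void Finish() {
        if (UnfinishedCFIProc)                      // illustrative flag
          llvm::report_fatal_error("Unfinished .cfi_startproc");
        FinishImpl();
      }
    protected:
      virtual void FinishImpl() = 0;  // streamer-specific finalization
      bool UnfinishedCFIProc = false;
    };
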
Evan Cheng
4009ad2e33 Copy implicit defs (e.g. r0) when changing tBX_RET to tPOP_RET. This bug is
exposed by an upcoming change which would delete the copy to the return register
because there is no use! It's amazing anything works.

llvm-svn: 147715
2012-01-07 02:55:54 +00:00
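
A hedged sketch of what copying the implicit operands looks like with the MachineInstr API (the surrounding rewrite is assumed):

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h"
    using namespace llvm;

    // Sketch (assumed surrounding rewrite): carry implicit register
    // operands, e.g. the return value in r0, from the old return to the
    // new one so the value still has a use.
    static void transferImplicitOps(const MachineInstr &OldMI,
                                    MachineInstrBuilder &NewMIB) {
      for (unsigned i = OldMI.getNumExplicitOperands(),
                    e = OldMI.getNumOperands(); i != e; ++i) {
        const MachineOperand &MO = OldMI.getOperand(i);
        if (MO.isReg() && MO.isImplicit())
          NewMIB.addOperand(MO);
      }
    }
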
Jakob Stoklund Olesen
536b4e24d8 Use movw+movt in ARMFastISel::ARMMaterializeGV.
This eliminates a lot of constant pool entries for -O0 builds of code
with many global variable accesses.

This speeds up -O0 codegen of consumer-typeset by 2x because the
constant island pass no longer has to look at thousands of constant pool
entries.

<rdar://problem/10629774>

llvm-svn: 147712
2012-01-07 01:47:05 +00:00
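
A hedged sketch of the materialization itself (opcode and flag names follow the ARM backend's conventions of the time; treat the exact calls as assumptions):

    // Sketch: load the address with a movw/movt pair instead of a
    // constant-pool entry.
    unsigned DestReg = createResultReg(&ARM::rGPRRegClass);
    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DL, TII.get(ARM::MOVi16), DestReg)
        .addGlobalAddress(GV, 0, ARMII::MO_LO16);  // low 16 bits
    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DL, TII.get(ARM::MOVTi16), DestReg)
        .addReg(DestReg)
        .addGlobalAddress(GV, 0, ARMII::MO_HI16);  // high 16 bits
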
Eric Christopher
09a41e8939 Make the 'x' constraint work for AVX registers as well.
Fixes rdar://10614894

llvm-svn: 147704
2012-01-07 01:02:09 +00:00
Jakob Stoklund Olesen
39a3fa2c29 Enable aligned NEON spilling by default.
Experiments show this to be a small speedup for modern ARM cores.

llvm-svn: 147689
2012-01-06 22:19:37 +00:00
Jakob Stoklund Olesen
4a98287c88 Abort AdjustBBOffsetsAfter early when possible.
llvm-svn: 147685
2012-01-06 21:40:15 +00:00
Chad Rosier
07ec6ac15d Initializing to false makes better sense. Thanks, David.
llvm-svn: 147679
2012-01-06 20:11:59 +00:00
Chad Rosier
a498c068f6 Fix uninitialized variable warning.
llvm-svn: 147676
2012-01-06 20:02:49 +00:00