mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-24 11:42:57 +01:00
Commit Graph

7631 Commits

Author SHA1 Message Date
Eli Friedman
9ea5599729 Fix atomic load and store on x86 to pass -verify-machineinstrs (and possibly fix some subtle bugs involving passes which check mayStore()).
This isn't exactly ideal, but it is good enough for the moment.

llvm-svn: 139245
2011-09-07 18:48:32 +00:00
James Molloy
f781d3d8e9 Refactor instprinter and mcdisassembler to take a SubtargetInfo. Add -mattr= handling to llvm-mc. Reviewed by Owen Anderson.
llvm-svn: 139237
2011-09-07 17:24:38 +00:00
Rafael Espindola
1cca4f99bd Detect attempts to use segmented stacks on non-ELF systems and error
(rather than assert) early.

llvm-svn: 139233
2011-09-07 16:10:57 +00:00
Bill Wendling
763ed58408 Reenable compact unwind by default. However, also emit the old version of unwind
information for older linkers.

llvm-svn: 139206
2011-09-06 23:47:14 +00:00
Rafael Espindola
9182560b8f Fix comment. Noticed by Duncan.
llvm-svn: 139161
2011-09-06 19:29:31 +00:00
Duncan Sands
d1311488fe Add codegen support for vector select (in the IR this means a select
with a vector condition); such selects become VSELECT codegen nodes.
This patch also removes VSETCC codegen nodes, unifying them with SETCC
nodes (codegen was actually often using SETCC for vector SETCC already).
This ensures that various DAG combiner optimizations kick in for vector
comparisons.  Passes dragonegg bootstrap with no testsuite regressions
(nightly testsuite as well as "make check-all").  Patch mostly by
Nadav Rotem.

llvm-svn: 139159
2011-09-06 19:07:46 +00:00
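
For reference, a minimal IR sketch of the form this change handles (function and value names are illustrative, not from the patch): a compare producing a vector of i1 feeding a select.

  define <4 x i32> @vsel(<4 x i32> %a, <4 x i32> %b) {
    ; vector compare: now lowered through an ordinary SETCC node
    %cmp = icmp slt <4 x i32> %a, %b
    ; select with a vector condition: becomes a VSELECT codegen node
    %r = select <4 x i1> %cmp, <4 x i32> %a, <4 x i32> %b
    ret <4 x i32> %r
  }
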
Rafael Espindola
9d9df4bc1a Fix style issues and typos found by Duncan.
llvm-svn: 139154
2011-09-06 18:43:08 +00:00
Duncan Sands
6939ae53ac Split the init.trampoline intrinsic, which currently combines GCC's
init.trampoline and adjust.trampoline intrinsics, into two intrinsics
like in GCC.  While having one combined intrinsic is tempting, it is
not natural because typically the trampoline initialization needs to
be done in one function, and the result of adjust trampoline is needed
in a different (nested) function.  To get around this llvm-gcc hacks the
nested function lowering code to insert an additional parent variable
holding the adjust.trampoline result that can be accessed from the child
function.  Dragonegg doesn't have the luxury of tweaking GCC code, so it
stored the result of adjust.trampoline in the memory GCC set aside for
the trampoline itself (this is always available in the child function),
and set up some new memory (using an alloca) to hold the trampoline.
Unfortunately this breaks Go which allocates trampoline memory on the
heap and wants to use it even after the parent has exited (!).  Rather
than doing even more hacks to get Go working, it seemed best to just use
two intrinsics like in GCC.  Patch mostly by Sanjoy Das.

llvm-svn: 139140
2011-09-06 13:37:06 +00:00
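
A rough IR sketch of the two-intrinsic scheme described above, written in the typed-pointer syntax of the era (all names are illustrative): the parent fills in the trampoline memory, and adjust.trampoline yields the callable address.

  declare void @llvm.init.trampoline(i8*, i8*, i8*)
  declare i8* @llvm.adjust.trampoline(i8*)

  define i8* @parent(i8* %tramp_mem, i8* %nest_val) {
    ; initialize the trampoline memory for the nested function
    call void @llvm.init.trampoline(i8* %tramp_mem, i8* bitcast (void (i8*)* @nested to i8*), i8* %nest_val)
    ; obtain the callable address (may differ from %tramp_mem on some targets)
    %fp = call i8* @llvm.adjust.trampoline(i8* %tramp_mem)
    ret i8* %fp
  }

  define void @nested(i8* nest %closure) {
    ret void
  }
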
Nick Lewycky
9b5a242546 Add a new MC bit for NaCl (Native Client) mode. NaCl requires that certain
instructions are more aligned than the CPU requires, and adds some additional
directives, to follow in future patches. Patch by David Meyer!

llvm-svn: 139125
2011-09-05 21:51:43 +00:00
Benjamin Kramer
902004dcd8 Use internal storage for command line option.
llvm-svn: 139079
2011-09-03 03:45:06 +00:00
Bruno Cardoso Lopes
02157d584a Add AVX versions to match AESENC/AESDEC intrinsics. This hopefully ends
the cycle of missing AVX counterparts of already present SSE* patterns

llvm-svn: 139073
2011-09-03 00:47:08 +00:00
Bruno Cardoso Lopes
c72ce24240 Add AVX version of a SSE4.1 VPBLENDVB pattern
llvm-svn: 139072
2011-09-03 00:47:05 +00:00
Bruno Cardoso Lopes
a25fc6f941 Add AVX versions of SSE4.1 EXTRACTPS patterns
llvm-svn: 139071
2011-09-03 00:47:03 +00:00
Bruno Cardoso Lopes
45d02d5eca Add AVX versions for SSE4.1 MOVZX* patterns
llvm-svn: 139070
2011-09-03 00:47:01 +00:00
Bruno Cardoso Lopes
cadec3711c Add one more AVX pattern for MOVZPQILo2PQI
llvm-svn: 139069
2011-09-03 00:46:58 +00:00
Bruno Cardoso Lopes
48eeb79003 Move PUNPCKLQDQ splat pattern close to the instruction definition and
duplicate it for AVX mode.

llvm-svn: 139068
2011-09-03 00:46:56 +00:00
Bruno Cardoso Lopes
ca90af60bd Add AVX pattern versions for PSHUFB,PSIGN{B,W,D}
llvm-svn: 139067
2011-09-03 00:46:54 +00:00
Bruno Cardoso Lopes
7fae5ca308 Add AVX versions of MOVZDI2PDI patterns. Use SUBREG_TO_REG to indicate
that the AVX versions (even the 128-bit ones) all clear the upper part
of the destination register.

llvm-svn: 139066
2011-09-03 00:46:51 +00:00
Bruno Cardoso Lopes
e749426ece Enforce subtarget checks in a few places to be explicit about when the
pattern should be matched

llvm-svn: 139065
2011-09-03 00:46:49 +00:00
Bruno Cardoso Lopes
323a5b334e Tidy up code, moving patterns to their appropriate place!
llvm-svn: 139064
2011-09-03 00:46:47 +00:00
Bruno Cardoso Lopes
ea1931b9d0 Add AVX versions of FsMOVAPS and FsMOVAPD. Teach X86InstrInfo how to use
them!

llvm-svn: 139063
2011-09-03 00:46:45 +00:00
Bruno Cardoso Lopes
eb041875c1 Teach X86FastISel to use AVX versions of instructions when possible
llvm-svn: 139062
2011-09-03 00:46:42 +00:00
Bruno Cardoso Lopes
86c67e11c9 Fix 80-column violations and style issues
llvm-svn: 139061
2011-09-03 00:46:40 +00:00
Bruno Cardoso Lopes
beb7a448e7 Tidy up some SSE/AVX convert intrinsics. Also add an AVX version of the
OptForSize pattern

llvm-svn: 139060
2011-09-03 00:46:38 +00:00
Jakob Stoklund Olesen
ef8527b836 Pseudo CMOV instructions don't clobber EFLAGS.
The explanation about a 0 argument being materialized as xor is no
longer valid.  Rematerialization will check if EFLAGS is live before
clobbering it.

The code produced by X86TargetLowering::EmitLoweredSelect does not
clobber EFLAGS.

This causes one less testb instruction to be generated in the cmov.ll
test case.

llvm-svn: 139057
2011-09-02 23:52:55 +00:00
Jakob Stoklund Olesen
29145a3de1 Check for EFLAGS live-out before clobbering it.
It is only allowed to clobber EFLAGS at the end of a block if it isn't
live-in to any successor.

llvm-svn: 139056
2011-09-02 23:52:52 +00:00
Jakob Stoklund Olesen
6d5d51f687 Use existing function.
llvm-svn: 139055
2011-09-02 23:52:49 +00:00
Jakob Stoklund Olesen
c710d8fdc7 Remove unused variables.
llvm-svn: 139047
2011-09-02 22:41:25 +00:00
Eli Friedman
383a3c76b2 Don't fast-isel for atomic load/store; some cases require extra handling missing from fast-isel.
llvm-svn: 139044
2011-09-02 22:33:24 +00:00
Kevin Enderby
90a1526592 Change X86 disassembly to print immediate values as signed by default. Special-case
those instructions where the immediate is not sign-extended.  rdar://8795217

llvm-svn: 139028
2011-09-02 20:01:23 +00:00
Bill Wendling
991a1dab16 Revert r138826 until PR10834 can be fixed.
llvm-svn: 139018
2011-09-02 18:15:04 +00:00
Bruno Cardoso Lopes
10f234f1a7 Fix vbroadcast matching logic to bail out early if the node has more than
one use. Fixes PR10825.

llvm-svn: 138951
2011-09-01 18:15:06 +00:00
Bruno Cardoso Lopes
8771512b75 Move more code around and duplicate AVX patterns: MOVHPS and MOVLPS
llvm-svn: 138897
2011-08-31 21:15:32 +00:00
Bruno Cardoso Lopes
22aceefbf7 Move MOVAPS,MOVUPS patterns close to the instructions definition
llvm-svn: 138896
2011-08-31 21:15:29 +00:00
Bruno Cardoso Lopes
4823fe07e6 Remove "_Int" forms of MOVUPSmr and MOVAPSmr
llvm-svn: 138895
2011-08-31 21:15:22 +00:00
Rafael Espindola
295e404961 Spelling and grammar fixes to problems found by Duncan.
llvm-svn: 138858
2011-08-31 16:43:33 +00:00
Eli Friedman
4fefb0a561 Make sure we don't crash when -miphoneos-version-min is specified on x86. Hopefully this will fix gcc testsuite failures.
llvm-svn: 138856
2011-08-31 16:19:51 +00:00
Eric Christopher
157bf8b08d Rework this conditional a bit.
Patch by Sanjoy Das

llvm-svn: 138853
2011-08-31 04:17:21 +00:00
Bruno Cardoso Lopes
5bd6e92f99 - Move all MOVSS and MOVSD patterns close to their definitions
- Duplicate some store patterns to their AVX forms!
- Caught a bug while restricting the patterns' subtarget; fix it
  and update a testcase to check it properly

llvm-svn: 138851
2011-08-31 03:04:20 +00:00
Bruno Cardoso Lopes
a9c2c56e13 Remove unnecessary AVX checks
llvm-svn: 138850
2011-08-31 03:04:14 +00:00
Bruno Cardoso Lopes
fe3f3344a6 Teach more places to use VMOVAPS/VMOVUPS instead of MOVAPS/MOVUPS
whenever AVX is enabled.

llvm-svn: 138849
2011-08-31 03:04:09 +00:00
Evan Cheng
bbabe9ff60 Fix (movhps load) lowering / pattern to match more cases. rdar://10050549
llvm-svn: 138848
2011-08-31 02:05:24 +00:00
Bill Wendling
1e8c335302 Fix off-by-one error Benjamin noticed.
llvm-svn: 138832
2011-08-30 21:23:24 +00:00
Bill Wendling
569b9fee87 Enable compact unwind info by default. This only applies to Darwin when CFI is
disabled.

llvm-svn: 138826
2011-08-30 20:54:11 +00:00
Jeffrey Yasskin
8f36e758c2 Fix C++0x narrowing errors when char is unsigned.
In the case of EDInstInfo, this would actually cause a bug when -1 became 255
and was then compared >=0 in llvm-mc/Disassembler.cpp.

llvm-svn: 138825
2011-08-30 20:53:29 +00:00
Rafael Espindola
9db302e741 Adds support for variable-sized allocas. For a variable-sized alloca,
code is inserted to first check if the current stacklet has enough
space. If so, space is allocated by simply decrementing the stack
pointer. Otherwise a runtime routine (__morestack_allocate_stack_space
in libgcc) is called which allocates the required memory from the
heap.

Patch by Sanjoy Das.

llvm-svn: 138818
2011-08-30 19:47:04 +00:00
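
For reference, the kind of IR this targets, i.e. an alloca whose size is not a compile-time constant (era typed-pointer syntax; names illustrative):

  define void @use_buffer(i32 %n) {
    ; dynamic size: the emitted code checks the current stacklet and, if it
    ; is too small, calls __morestack_allocate_stack_space from libgcc
    %buf = alloca i8, i32 %n
    call void @consume(i8* %buf)
    ret void
  }
  declare void @consume(i8*)
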
Rafael Espindola
7721c15106 Adds a SelectionDAG node X86SegAlloca which will be custom lowered
from DYNAMIC_STACKALLOC.

Two new pseudo instructions (SEG_ALLOCA_32 and SEG_ALLOCA_64) which
will match X86SegAlloca (based on word size) are also added.  They
will be custom emitted to inject the actual stack handling code.

Patch by Sanjoy Das.

llvm-svn: 138814
2011-08-30 19:43:21 +00:00
Rafael Espindola
321e47cd0b Emit segmented-stack specific code into function prologues for
X86. Modify the pass added in the previous patch to call this new
code.

The new prologue generated will call a libgcc routine (__morestack)
to allocate more stack space from the heap when required.

Patch by Sanjoy Das.

llvm-svn: 138812
2011-08-30 19:39:58 +00:00
Eli Friedman
4d90e53381 Explicitly zero out parts of a vector which are required to be zero by the algorithm in LowerUINT_TO_FP_i32. This only has a substantial effect on the generated code when the input is extracted from a vector register; other ways of loading an i32 do the appropriate zeroing implicitly. Fixes PR10802.
llvm-svn: 138768
2011-08-29 21:15:46 +00:00
Bruno Cardoso Lopes
3a09888a72 Move non-instruction patterns to a more appropriate place!
llvm-svn: 138744
2011-08-29 17:51:24 +00:00
Nicolas Geoffray
74b006fe71 Remove premature previous commit.
llvm-svn: 138725
2011-08-28 14:52:51 +00:00
Nicolas Geoffray
d30e51ca07 Encoding of instructions referencing segments has changed. Do what X86MCCodeEmitter does.
llvm-svn: 138723
2011-08-28 13:07:57 +00:00
Benjamin Kramer
6411b8f81a Silence GCC warnings and make an array const.
llvm-svn: 138706
2011-08-27 17:36:14 +00:00
Eli Friedman
9f95c7d381 Add support for generating CMPXCHG16B on x86-64 for the cmpxchg IR instruction.
llvm-svn: 138660
2011-08-26 21:21:21 +00:00
Craig Topper
b20cee1e19 Fix disassembling of VCVTSD2SI
llvm-svn: 138623
2011-08-26 04:49:29 +00:00
Bruno Cardoso Lopes
e6119d18de Do the same as r138461. Mark VZEROALL as clobbering all YMM registers
llvm-svn: 138592
2011-08-25 22:23:58 +00:00
Bruno Cardoso Lopes
5b3d2c9e17 Add support for AVX 256-bit version of MOVDDUP!
llvm-svn: 138588
2011-08-25 21:40:37 +00:00
Bruno Cardoso Lopes
dedd2ffa0b Make the isMOVDDUP mask check stricter and update comments!
llvm-svn: 138587
2011-08-25 21:40:34 +00:00
Craig Topper
a6085b9757 Add more missing TB encodings to VEX instructions to allow them to be disassembled. Fixes remainder of PR10678.
llvm-svn: 138553
2011-08-25 08:11:01 +00:00
Craig Topper
06ed6cb856 Add TB encoding to VZEROALL, VZEROUPPER, and VCVTPS2PD to allow them to be disassembled. Fixes PR10723.
llvm-svn: 138551
2011-08-25 06:57:46 +00:00
Bruno Cardoso Lopes
5d34219953 Add support for 256-bit versions of VSHUFPD and VSHUFPS.
llvm-svn: 138546
2011-08-25 02:58:26 +00:00
Bruno Cardoso Lopes
ccffe56b5d Add memory version of SHUFPD to mask decoding!
llvm-svn: 138545
2011-08-25 02:58:21 +00:00
Bruno Cardoso Lopes
dfa5cf4620 Create a section for non-instruction patterns at the beginning of the
file, and move more code around!

llvm-svn: 138521
2011-08-24 23:18:11 +00:00
Bruno Cardoso Lopes
719b357628 Move code around!
llvm-svn: 138520
2011-08-24 23:18:09 +00:00
Bruno Cardoso Lopes
3824b766ac Organize UNPCK* patterns, also add remaining for AVX.
llvm-svn: 138519
2011-08-24 23:18:06 +00:00
Bruno Cardoso Lopes
82c8bc7efd Move remaining MOVDDUP patterns close to the MOVDDUP definition and duplicate
the missing ones for AVX.

llvm-svn: 138518
2011-08-24 23:18:04 +00:00
Bruno Cardoso Lopes
d315a6b6e6 Organize and tidy up MOVDDUP section. Also update comments!
llvm-svn: 138517
2011-08-24 23:18:02 +00:00
Bruno Cardoso Lopes
762fb13cc9 Move MOVHLPS patterns close to MOVHLPS definition, and duplicate the
pattern for 128-bit AVX mode.

llvm-svn: 138516
2011-08-24 23:17:59 +00:00
Bruno Cardoso Lopes
d62766849f Move all PSHUF* patterns close to the PSHUF* definitions. Also be
explicit about which subtarget they refer to, and add AVX versions of
the ones we currently lack. Remove old and now-incorrect comments!

llvm-svn: 138515
2011-08-24 23:17:57 +00:00
Bruno Cardoso Lopes
122f7cfc92 Move all SHUFP* patterns close to the SHUFP* definitions. Also be
explicit about which subtarget they refer to, and add AVX versions of
the ones we currently lack. Make the mask check stricter, to make
clear it won't be used to match 256-bit versions!

llvm-svn: 138514
2011-08-24 23:17:55 +00:00
Eli Friedman
b6597a2e70 Hook up 64-bit atomic load/store on x86-32. I plan to write more efficient implementations eventually.
llvm-svn: 138505
2011-08-24 22:33:28 +00:00
Eli Friedman
e4cd816e7b Fix whitespace.
llvm-svn: 138487
2011-08-24 21:17:30 +00:00
Eli Friedman
6f95a6ae1b Basic x86 code generation for atomic load and store instructions.
llvm-svn: 138478
2011-08-24 20:50:09 +00:00
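
The IR constructs being lowered, in the pre-3.5 typed-pointer atomics syntax of the time (names illustrative):

  define i32 @roundtrip(i32* %p, i32 %v) {
    store atomic i32 %v, i32* %p seq_cst, align 4
    %r = load atomic i32* %p seq_cst, align 4
    ret i32 %r
  }
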
Bruno Cardoso Lopes
734febce18 Mark VZEROALL as clobbering all YMM registers
llvm-svn: 138461
2011-08-24 18:48:33 +00:00
Evan Cheng
420bf5446c Move TargetRegistry and TargetSelect from Target to Support where they belong.
These are strictly utilities for registering targets and components.

llvm-svn: 138450
2011-08-24 18:08:43 +00:00
Craig Topper
1da38a34a6 Break 256-bit vector int add/sub/mul into two 128-bit operations to avoid costly scalarization. Fixes PR10711.
llvm-svn: 138427
2011-08-24 06:14:18 +00:00
Bruno Cardoso Lopes
8959b54713 Fix a nasty bug where a v4i64 was being wrongly emitted with 32-bit
permutations. Also tidy up some patterns and move them close to their
instruction definitions!

llvm-svn: 138392
2011-08-23 22:06:37 +00:00
Evan Cheng
ed13551c1d Some refactoring so TargetRegistry.h no longer has to include any files
from MC.

llvm-svn: 138367
2011-08-23 20:15:21 +00:00
Nick Lewycky
11874a4e0a Make PerformSubCombine work on integers larger than i128. Fixes a crasher.
llvm-svn: 138354
2011-08-23 19:01:24 +00:00
Craig Topper
67b22aedb4 Add support for breaking 256-bit v16i16 and v32i8 VSETCC into two 128-bit ones, avoiding scalarization. Add VEX forms of pcmpeqq and pcmpgtq. Fixes more cases of PR10712.
llvm-svn: 138321
2011-08-23 04:36:33 +00:00
Bruno Cardoso Lopes
8024703a16 Introduce a pass to insert vzeroupper instructions to avoid the AVX-to-SSE
transition penalty. The pass is enabled through the "x86-use-vzeroupper"
llc command line option. This is only the first step (a very naive and
conservative one) to sketch out the idea, but a proper DFA is coming next
to allow smarter decisions. Comments and ideas, now and in further commits,
will be very much appreciated.

llvm-svn: 138317
2011-08-23 01:14:17 +00:00
Benjamin Kramer
bd13a6a319 X86: Add some operand types required to identify calls.
llvm-svn: 138285
2011-08-22 22:55:32 +00:00
Bruno Cardoso Lopes
8007165688 Add support for breaking 256-bit integer VSETCC into two 128-bit ones,
avoiding scalarization of the compare. Reduces code from 59 to 6
instructions. Fixes PR10712.

llvm-svn: 138271
2011-08-22 20:31:04 +00:00
Bruno Cardoso Lopes
23ff325f5b Add 128-bit AVX codegen for PCMP* family of integer instructions
llvm-svn: 138270
2011-08-22 20:31:00 +00:00
Bruno Cardoso Lopes
9979e44f1b Rewrite part of the VEX encoding logic to be easier to read! Also fix
a bug and add a testcase!

llvm-svn: 138123
2011-08-19 22:27:29 +00:00
Craig Topper
f68d77215d Add TB encoding to VEX versions of SSE fp logical operations to fix disassembler
llvm-svn: 138034
2011-08-19 05:28:50 +00:00
Bruno Cardoso Lopes
306110c29a Fix PR10677. Initial patch and idea by Peter Cooper but I've changed the
implementation!

llvm-svn: 138029
2011-08-19 02:23:56 +00:00
Bruno Cardoso Lopes
0d458d4bb3 Re-encode the 128-bit AVX versions of SQRT, RSQRT, and RCP to have 3 operands
instead of 2. They were already defined this way in their regular
versions, but not in the intrinsics versions (*_Int); that would work
for assembly emission but not for object code, since a MachineOperand
would be missing. This commit fixes PR10697.

Also removed the {VSQRT,VRSQRT,VRCP}r_Int forms and match the intrinsic
via INSERT_SUBREG+EXTRACT_SUBREG patterns. The same couldn't be done for
memory versions because the sse_load_f32/sse_load_f64 operands need special
handling and don't work like regular "addr" operands.

There are right now 114 "*_Int" and 98 "Int_*" forms! I'm slowly
removing them as I step through, but hope we can get rid of these
someday, they are really annoying :)

llvm-svn: 138012
2011-08-18 23:59:21 +00:00
Bruno Cardoso Lopes
c174d8ac48 Clean up vector logical ops in AVX and use the int versions for simple
v2i64

llvm-svn: 137919
2011-08-18 02:11:34 +00:00
Bruno Cardoso Lopes
82795e6b41 Fix PR10688. Add support for splitting 256-bit vector shifts when the
shift amount is variable

llvm-svn: 137885
2011-08-17 22:12:20 +00:00
Owen Anderson
3146968039 Allow the MCDisassembler to return a "soft fail" status code, indicating an instruction that is disassemblable, but invalid. Only used for ARM UNPREDICTABLE instructions at the moment.
Patch by James Molloy.

llvm-svn: 137830
2011-08-17 17:44:15 +00:00
Bruno Cardoso Lopes
98531dfd08 Introduce matching patterns for vbroadcast AVX instruction. The idea is to
match splats in the form (splat (scalar_to_vector (load ...))) whenever
the load can be folded. All the logic and instruction emission is
working, but because of PR8156 there is no way to match the loads, because
they can never be folded for splats. Thus, the tests are XFAILed, but
I've tested and exercised all the logic using a relaxed version for
checking the foldable loads, as if the bug was already fixed. This
should work out of the box once PR8156 gets fixed since MayFoldLoad will
work as expected.

llvm-svn: 137810
2011-08-17 02:29:19 +00:00
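
The IR shape these patterns are meant to match, a splat of a loaded scalar, which builds the (splat (scalar_to_vector (load ...))) DAG mentioned above (era load syntax; names illustrative):

  define <8 x float> @splat_load(float* %p) {
    %s = load float* %p                      ; the load vbroadcast wants to fold
    %v = insertelement <8 x float> undef, float %s, i32 0
    %r = shufflevector <8 x float> %v, <8 x float> undef, <8 x i32> zeroinitializer
    ret <8 x float> %r
  }
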
Bruno Cardoso Lopes
e3bab71a4b Update comments about vector splat handling in x86
llvm-svn: 137808
2011-08-17 02:29:13 +00:00
Bruno Cardoso Lopes
4ff4ed28af Now that we have a canonical way to handle 256-bit splats:
vinsertf128 $1 + vpermilps $0, remove the old code that used to first
do the splat in a 128-bit vector and then insert it into a larger one.
This is better because the handling code gets simpler and also makes
better room for the upcoming vbroadcast!

llvm-svn: 137807
2011-08-17 02:29:10 +00:00
Bruno Cardoso Lopes
d64294fb0a Instead of always leaving the work to the generic legalizer when
there is no support for native 256-bit shuffles, be smarter in some
cases; for example, when you can extract specific 128-bit parts and use
regular 128-bit shuffles for them. Example:

For this shuffle:
  shufflevector <4 x i64> %a, <4 x i64> %b, <4 x i32>
                <i32 1, i32 0, i32 7, i32 6>

This was expanded to:
  vextractf128  $1, %ymm1, %xmm2
  vpextrq $0, %xmm2, %rax
  vmovd %rax, %xmm1
  vpextrq $1, %xmm2, %rax
  vmovd %rax, %xmm2
  vpunpcklqdq %xmm1, %xmm2, %xmm1
  vpextrq $0, %xmm0, %rax
  vmovd %rax, %xmm2
  vpextrq $1, %xmm0, %rax
  vmovd %rax, %xmm0
  vpunpcklqdq %xmm2, %xmm0, %xmm0
  vinsertf128 $1, %xmm1, %ymm0, %ymm0
  ret

Now we get:
  vshufpd $1, %xmm0, %xmm0, %xmm0
  vextractf128  $1, %ymm1, %xmm1
  vshufpd $1, %xmm1, %xmm1, %xmm1
  vinsertf128 $1, %xmm1, %ymm0, %ymm0

llvm-svn: 137733
2011-08-16 18:21:54 +00:00
Bruno Cardoso Lopes
f026c60f3d While I'm here, remove the "_alt" hacks from a series of INSERT_SUBREGs and
also add the AVX versions of the 128-bit patterns

llvm-svn: 137685
2011-08-15 23:36:51 +00:00
Bruno Cardoso Lopes
1e817d1451 Reorder declarations of vmovmskp* and also add the necessary AVX
predicate and TB encoding fields. This fixes the encoding for the
attached testcase. This fixes PR10625.

llvm-svn: 137684
2011-08-15 23:36:45 +00:00
Jim Grosbach
31c0c9a1f6 MCTargetAsmParser target match predicate support.
Allow a target assembly parser to do context sensitive constraint checking
on a potential instruction match. This will be used, for example, to handle
Thumb2 IT block parsing.

llvm-svn: 137675
2011-08-15 23:03:29 +00:00
Bruno Cardoso Lopes
b81c3ed76d Fix PR10656. It's only profitable to use 128-bit inserts and extracts
when AVX mode is on. Otherwise it's just more work for the type
legalizer.

llvm-svn: 137661
2011-08-15 21:45:54 +00:00
Bruno Cardoso Lopes
8bdbc680ea Fix comment!
llvm-svn: 137521
2011-08-12 21:54:42 +00:00
Bruno Cardoso Lopes
2d100ca13c VPERM2F128 is an AVX instruction which permutes between two 256-bit
vectors. It operates on 128-bit elements instead of regular scalar
types. Recognize shuffles that are suitable for VPERM2F128 and teach
the x86 legalizer how to handle them.

llvm-svn: 137519
2011-08-12 21:48:26 +00:00
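
An illustrative shuffle (not from the commit) that moves whole 128-bit halves and is therefore a candidate for vperm2f128: it takes the high lane of %a followed by the low lane of %b.

  define <8 x float> @lane_select(<8 x float> %a, <8 x float> %b) {
    ; indices 4-7 are %a's high lane, 8-11 are %b's low lane
    %r = shufflevector <8 x float> %a, <8 x float> %b, <8 x i32> <i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11>
    ret <8 x float> %r
  }
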
Bruno Cardoso Lopes
17ae896095 Move code around and add comments
llvm-svn: 137518
2011-08-12 21:48:22 +00:00
Duncan Sands
10a9e984bc Silence a bunch (but not all) "variable written but not read" warnings
when building with assertions disabled.

llvm-svn: 137460
2011-08-12 14:54:45 +00:00
Andrew Trick
68f830e252 findDeadCallerSavedReg fix: Missing NULL terminator in register arrays.
Fix by Ivan Baev. Sorry I don't have a unit test, but the fix is obvious so I don't want to delay it.

llvm-svn: 137404
2011-08-12 00:49:19 +00:00
Bruno Cardoso Lopes
328a6a980b Add a DAG combine to transform 256-bit shuffles into simple vector
inserts and extracts. This simple combine makes us generate only 1
instruction instead of 11 in the v8 case.

llvm-svn: 137362
2011-08-11 21:50:44 +00:00
Bruno Cardoso Lopes
38d4afa02f Fix PR10492 by teaching MOVHLPS and MOVLPS mask matching to be more strict.
llvm-svn: 137324
2011-08-11 18:59:13 +00:00
Nadav Rotem
2461d23062 Add a comment, per Bruno's CR.
llvm-svn: 137313
2011-08-11 17:05:47 +00:00
Nadav Rotem
de1b485f3f [AVX] If the data which is going to be saved is already in two XMM registers
(for example, after an integer operation), do not pack the registers into a YMM
before saving. It's better to save them as two XMM registers.

Before:
                vinsertf128         $1, %xmm3, %ymm0, %ymm3
                vinsertf128         $0, %xmm1, %ymm3, %ymm1
                vmovaps              %ymm1, 416(%rsp)

After:
                vmovaps              %xmm3, 416+16(%rsp)
                vmovaps              %xmm1, 416(%rsp)

llvm-svn: 137308
2011-08-11 16:41:21 +00:00
Bruno Cardoso Lopes
4106caa9af Cleanup: Remove Int_ CVTSS2SI* forms
llvm-svn: 137297
2011-08-11 02:52:36 +00:00
Bruno Cardoso Lopes
8674ddf55a Splats for v8i32/v8f32 can be handled by VPERMILPSY; not handling them
was causing infinite recursive calls in legalize. Fixes PR10562

llvm-svn: 137296
2011-08-11 02:49:44 +00:00
Bruno Cardoso Lopes
954ac403c7 Use the splat index to generate the desired shuffle. Otherwise we
could only get undefs and the vector shuffle becomes an undef,
generating wrong code.

llvm-svn: 137295
2011-08-11 02:49:41 +00:00
Eli Friedman
17bd9e5d7c Fix X86TargetLowering::LowerExternalSymbol so that it actually works in non-trivial cases. This hasn't been an issue before because the function isn't normally called (but apparently is used to generate a tail-call to sin() on ELF x86-32 with PIC and SSE2).
Fixes PR9693.

llvm-svn: 137292
2011-08-11 01:48:05 +00:00
Nadav Rotem
4a8d78d24a When performing a truncating store, it is sometimes possible to rearrange the
data in-register prior to saving to memory.  When we reorder the data, we avoid
the need to save multiple scalars to memory, making a single regular store
instead.

llvm-svn: 137238
2011-08-10 19:30:14 +00:00
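
An illustrative case of such a truncating store (assumed example, not from the commit): the elements can be packed in-register and written with one store instead of several scalar stores.

  define void @trunc_store(<4 x i32> %x, <4 x i8>* %p) {
    ; narrow in-register, then emit a single 32-bit store
    %t = trunc <4 x i32> %x to <4 x i8>
    store <4 x i8> %t, <4 x i8>* %p, align 4
    ret void
  }
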
Bruno Cardoso Lopes
565ab1542a The following X86 pattern is incorrect:
def : Pat<(X86Movss VR128:$src1,
                   (bc_v4i32 (v2i64 (load addr:$src2)))),
          (MOVLPSrm VR128:$src1, addr:$src2)>;
This matches a MOVSS dag with a MOVLPS instruction. However, MOVSS will replace only the low 32 bits of the register, while the MOVLPS instruction will replace the low 64 bits. A testcase that illustrates the bug is added, and the one that was already present is modified. Patch by Tanya Lattner.

llvm-svn: 137227
2011-08-10 17:45:17 +00:00
Bruno Cardoso Lopes
4a435a361d Fix a bug in vpermilps mask checking. Fix PR10560
llvm-svn: 137194
2011-08-10 01:54:17 +00:00
Bruno Cardoso Lopes
9a695724bd Add 256-bit support for v8i32, v4i64 and v4f64 ISD::SELECT. Fix PR10556
llvm-svn: 137179
2011-08-09 23:27:13 +00:00
Bruno Cardoso Lopes
7461b930f3 Add v16i16 and v32i8 store patterns
llvm-svn: 137166
2011-08-09 22:39:53 +00:00
Bruno Cardoso Lopes
028c6aa951 Use fp unpack instructions to unpack int types. Until we have AVX2, this
is the best we can do for these patterns. This fixes PR10554.

llvm-svn: 137161
2011-08-09 22:18:37 +00:00
Eli Friedman
44fd5b2b59 Fix a couple ridiculous copy-paste errors. rdar://9914773 .
llvm-svn: 137160
2011-08-09 22:17:39 +00:00
Bruno Cardoso Lopes
633400ee00 Reapply a more appropriate solution than in r137114. AVX supports
v4f64 = sitofp v4i32. This fixes PR10559.
Also add support for v4i32 = fptosi v4f64.

llvm-svn: 137128
2011-08-09 17:39:13 +00:00
Bruno Cardoso Lopes
1962a341d8 Revert r137114
llvm-svn: 137127
2011-08-09 17:39:01 +00:00
Bruno Cardoso Lopes
5dac86dac6 Handle sitofp from v4i32 to v4f64. Fixes PR10559
llvm-svn: 137114
2011-08-09 05:48:01 +00:00
Bruno Cardoso Lopes
d521431558 Add support for AVX vector fextend
llvm-svn: 137105
2011-08-09 03:04:29 +00:00
Bruno Cardoso Lopes
09a727298f Add AVX versions of 128-bit sitofp and fptosi
llvm-svn: 137104
2011-08-09 03:04:25 +00:00
Bruno Cardoso Lopes
1025d1eb3b Add two patterns to match special vmovss and vmovsd cases. Also fix
the patterns already there to be stricter regarding the predicate.
This fixes PR10558

llvm-svn: 137100
2011-08-09 01:43:09 +00:00
Bruno Cardoso Lopes
d7eac41193 Make LowerVSETCC aware of AVX types and add patterns to match them.
llvm-svn: 137090
2011-08-09 00:46:57 +00:00
Bruno Cardoso Lopes
d8534855ff Add support for several vector shifts operations while in AVX mode. Fix PR10581
llvm-svn: 137067
2011-08-08 21:31:08 +00:00
Jakob Stoklund Olesen
9451389166 Hoist hasLoadFromStackSlot and hasStoreToStackSlot.
These methods are target-independent since they simply scan the
memory operands.  They can live in TargetInstrInfoImpl.

llvm-svn: 137063
2011-08-08 20:53:24 +00:00
Jakob Stoklund Olesen
85931574b0 Don't clobber pending ST regs when FP regs are killed.
X86FloatingPoint keeps track of pending ST registers for an upcoming
inline asm instruction with fixed stack register constraints.  It does
this by remembering which FP register holds the value that should appear
at a fixed stack position for the inline asm.

When that FP register is killed before the inline asm, make sure to
duplicate it to a scratch register, so the ST register still has a live
FP reference.

This could happen when the same FP register was copied to two ST
registers, or when a spill instruction is inserted between the ST copy
and the inline asm.

This fixes PR10602.

llvm-svn: 137050
2011-08-08 17:15:43 +00:00
Chandler Carruth
520ba0fe44 Silence unused variable warnings in release builds.
llvm-svn: 136956
2011-08-05 01:08:21 +00:00
Jason W Kim
82c4669079 Fix http://llvm.org/bugs/show_bug.cgi?id=10583; tests for 1- and 2-byte fixups to be added
llvm-svn: 136954
2011-08-05 00:53:03 +00:00
Evan Cheng
12fbbde46a Fix an obvious typo. Patch by Ivan Krasin.
llvm-svn: 136899
2011-08-04 18:38:15 +00:00
Duncan Sands
c7008b3610 Add obviously missing "break". Noticed by Andrey Karpov with
the PVS-studio tool.

llvm-svn: 136878
2011-08-04 15:45:59 +00:00
Jason W Kim
18ca6290c9 Fix http://llvm.org/bugs/show_bug.cgi?id=10568
Move the reloc size assert into AsmBackend - where it is more apropos.

llvm-svn: 136855
2011-08-04 00:38:45 +00:00
Bill Wendling
60e17f8212 Only access both operands of an INSERT_SUBVECTOR if it is an INSERT_SUBVECTOR.
Fixes PR10527.

llvm-svn: 136853
2011-08-04 00:32:58 +00:00
Benjamin Kramer
6fbd7b63c2 Remove unused variables.
llvm-svn: 136803
2011-08-03 19:53:48 +00:00
Jakob Stoklund Olesen
002075193b Handle IMPLICIT_DEF instructions in X86FloatingPoint.
This fixes PR10575.

llvm-svn: 136787
2011-08-03 16:33:19 +00:00
Eli Friedman
d185fa3deb Don't create a ridiculous EXTRACT_ELEMENT. PR10563.
The testcase looks extremely fragile, so I'm adding an assertion which should catch any cases like this.

llvm-svn: 136711
2011-08-02 18:38:35 +00:00
Bruno Cardoso Lopes
ac0984dc7e Make this kind of lowering be supported by 256-bit instructions:
  shuffle (scalar_to_vector (load (ptr + 4))), undef, <0, 0, 0, 0>
To:
  shuffle (vload ptr), undef, <1, 1, 1, 1>
Fixes PR10494

llvm-svn: 136691
2011-08-02 16:06:18 +00:00
Nick Lewycky
3a0577c3fa Bail from FastISel when we encounter a volatile memset intrinsic. Patch by Ivan
Krasin!

llvm-svn: 136663
2011-08-02 00:40:16 +00:00
Bruno Cardoso Lopes
771876cade Add v4f64 -> v2f32 fp_round support. Also add a testcase to exercise
the legalizer. This commit together with the two previous ones fixes
PR10495.

llvm-svn: 136654
2011-08-01 21:54:09 +00:00
Bruno Cardoso Lopes
0db437e0d6 Teach PreprocessISelDAG to be aware of vector types and to not process them.
llvm-svn: 136653
2011-08-01 21:54:05 +00:00
Bruno Cardoso Lopes
b7ea699afa Lower CONCAT_VECTORS to use two VINSERTF128 instructions instead of
using a stack store.

llvm-svn: 136652
2011-08-01 21:54:02 +00:00
Bruno Cardoso Lopes
d3a5171087 Since vectors with all ones can't be created with a 256-bit instruction,
avoid returning early for v8i32 types, which would only be valid for
vectors with all zeros. Also split the handling of zeros and ones into separate
checking logic since they are handled differently. This fixes PR10547

llvm-svn: 136642
2011-08-01 19:51:53 +00:00
Douglas Gregor
1c34b16e1a Update CMake target names for tablegen-generated data in the X86 and ARM targets. This should fix the CMake build with MSVC.
llvm-svn: 136621
2011-08-01 16:29:27 +00:00
Eli Friedman
6f2419f1a2 Misc optimizer+codegen work for 'cmpxchg' and 'atomicrmw'. They appear to be
working on x86 (at least for trivial testcases); other architectures will
need more work so that they actually emit the appropriate instructions for
orderings stricter than 'monotonic'. (As far as I can tell, the ARM, PPC,
Mips, and Alpha backends need such changes.)

llvm-svn: 136457
2011-07-29 03:05:32 +00:00
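
The two IR instructions in question, in the pre-3.5 syntax of the time, where cmpxchg returns just the loaded value (names illustrative):

  define i32 @rmw_then_cas(i32* %p, i32 %cmp, i32 %new) {
    %old = atomicrmw add i32* %p, i32 1 seq_cst
    %val = cmpxchg i32* %p, i32 %cmp, i32 %new seq_cst
    ret i32 %val
  }
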
Bruno Cardoso Lopes
871df895f4 Fix two tests that I broke in the previous commits. The mask elements
in the second half must be reindexed.

llvm-svn: 136454
2011-07-29 02:05:28 +00:00
Bruno Cardoso Lopes
2b3d85d81c Match VPERMIL masks more strictly and update the target-specific mask
generation to always catch the weird cases.

llvm-svn: 136453
2011-07-29 01:31:15 +00:00
Bruno Cardoso Lopes
04aa91a2ae Add shuffle decoding support for VPERMILPD variants
llvm-svn: 136452
2011-07-29 01:31:11 +00:00
Bruno Cardoso Lopes
473d982caf Add v8i32 and v4i64 vpermil patterns
llvm-svn: 136451
2011-07-29 01:31:07 +00:00
Bruno Cardoso Lopes
bc6ed358f2 Fix a bug while generating target-specific VPERMIL masks: skip
undef mask elements. This fixes PR10529.

llvm-svn: 136450
2011-07-29 01:31:04 +00:00
Bruno Cardoso Lopes
5a2e46d170 Enable usage of SSE4 extracts and inserts in their 128-bit AVX forms.
Also tidy up code a bit.

llvm-svn: 136449
2011-07-29 01:31:02 +00:00
Bruno Cardoso Lopes
02bbf20b02 Clean up PALIGNR handling and remove the old palign pattern fragment.
Also make PALIGNR masks not match 256-bit vectors, which aren't supported.
It's also a step toward solving PR10489

llvm-svn: 136448
2011-07-29 01:30:59 +00:00
Chandler Carruth
f7890e34b9 Rewrite the CMake build to use explicit dependencies between libraries,
specified in the same file where the library itself is created. This is
more idiomatic for CMake builds, and also allows us to correctly specify
dependencies that are missed due to bugs in the GenLibDeps perl script,
or that change from compiler to compiler. On Linux, this returns CMake to
a place where it can reliably rebuild several targets of LLVM.

I have tried not to change the dependencies from the ones in the current
auto-generated file. The only places I've really diverged are in places
where I was seeing link failures, and added a dependency. The goal of
this patch is not to start changing the dependencies, merely to move
them into the correct location, and an explicit form that we can control
and change when necessary.

This also removes a serialization point in the build because we don't
have to scan all the libraries before we begin building various tools.
We no longer have a step of the build that regenerates a file inside the
source tree. A few other associated cleanups fall out of this.

This isn't really finished yet though. After talking to dgregor he urged
switching to a single CMake macro to construct libraries with both
sources and dependencies in the arguments. Migrating from the two macros
to that style will be a follow-up patch.

Also, llvm-config is still generated with GenLibDeps.pl, which means it
still has slightly buggy dependencies. The internal CMake
'llvm-config-like' macro uses the correct explicitly specified
dependencies however. A future patch will switch llvm-config generation
(when using CMake) to be based on these deps as well.

This may well break Windows. I'm getting a machine set up now to dig
into any failures there. If anyone can chime in with problems they see
or ideas of how to solve them for Windows, much appreciated.

llvm-svn: 136433
2011-07-29 00:14:25 +00:00
Oscar Fuentes
af4fae1c0f Explicitly declare a library dependency of LLVM*Desc to
LLVM*AsmPrinter.

GenLibDeps.pl fails to detect vtable references. As this is the only
referenced symbol from LLVM*Desc to LLVM*AsmPrinter on optimized
builds, the algorithm that creates the list of libraries to be linked
into tools doesn't know about the dependency and sometimes places the
libraries in the wrong order, yielding error messages like this:

../../lib/libLLVMARMDesc.a(ARMMCTargetDesc.cpp.o): In function
`llvm::ARMInstPrinter::ARMInstPrinter(llvm::MCAsmInfo const&)':
ARMMCTargetDesc.cpp:(.text._ZN4llvm14ARMInstPrinterC1ERKNS_9MCAsmInfoE
[llvm::ARMInstPrinter::ARMInstPrinter(llvm::MCAsmInfo
const&)]+0x2a): undefined reference to `vtable for
llvm::ARMInstPrinter'

llvm-svn: 136328
2011-07-28 02:33:52 +00:00
Bruno Cardoso Lopes
e0d86eec4f Invert the subvector insertion to be more likely to be taken as a COPY
llvm-svn: 136324
2011-07-28 01:26:53 +00:00
Bruno Cardoso Lopes
e24a043703 Add patterns to generate copies for extract_subvector instead of
using vextractf128. This will reduce the number of issued instructions
for several AVX code sequences.

llvm-svn: 136323
2011-07-28 01:26:50 +00:00
Bruno Cardoso Lopes
73945bf79a movd/movq write zeros in the high 128-bit part of the vector. Use
them to match 256-bit scalar_to_vector+zext.

llvm-svn: 136322
2011-07-28 01:26:46 +00:00
Bruno Cardoso Lopes
1f63a37172 Add a few patterns to match allzeros without having to use the fp unit.
Take advantage of the fact that the 128-bit vpxor zeros the upper part, and use it.
This also fixes PR10491

llvm-svn: 136321
2011-07-28 01:26:43 +00:00
Bruno Cardoso Lopes
06d8be564f Add SINT_TO_FP and FP_TO_SINT support for v8i32 types. Also move
a convert pattern close to the instruction definition.

llvm-svn: 136320
2011-07-28 01:26:39 +00:00
Evan Cheng
04762a3cf5 Emit an error if the asm parser parses x86-64-only registers, e.g. %rax, %sil.
This can happen in cases where the TableGen-generated asm matcher cannot check
whether a register operand is in the right register class, e.g. mem operands.

rdar://8204588

llvm-svn: 136292
2011-07-27 23:22:03 +00:00
Kevin Enderby
9adbbfffd0 Fix llvm-mc handling of x86 instructions that take 8-bit unsigned immediates.
llvm-mc gives an "invalid operand" error for instructions that take an unsigned
immediate which have the high bit set such as:
    pblendw $0xc5, %xmm2, %xmm1
llvm-mc treats all x86 immediates as signed values and range checks them.
A small number of x86 instructions use the imm8 field as a set of bits.
This change only affects those instructions where the high bit is not
ignored.  The others remain unchanged.

llvm-svn: 136287
2011-07-27 23:01:50 +00:00
Eli Friedman
842ea169de Code generation for 'fence' instruction.
llvm-svn: 136283
2011-07-27 22:21:52 +00:00
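
The IR construct in question; on x86 a seq_cst fence is normally lowered to an mfence (or an equivalent locked operation), though that mapping is background knowledge rather than part of this commit:

  define void @barrier() {
    fence seq_cst
    ret void
  }
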
Eli Friedman
1a80401da2 X86ISD::MEMBARRIER does not require SSE2; it doesn't actually generate any code, and all x86 processors will honor the required semantics.
llvm-svn: 136249
2011-07-27 19:43:50 +00:00
Jeffrey Yasskin
8a0f9f17a3 Explicitly cast narrowing conversions inside {}s that will become errors in
C++0x.

llvm-svn: 136211
2011-07-27 06:22:51 +00:00
Bruno Cardoso Lopes
9b12ad069e Move some code around to open up opportunities for more shuffle matching
llvm-svn: 136201
2011-07-27 00:56:37 +00:00
Bruno Cardoso Lopes
8830fde434 vpermilps and vpermilpd have different behaviour regarding the
usage of the shuffle bitmask. Both work in 128-bit lanes without
crossing, but in the former the mask of the high part is the same as
that used by the low part, while in the latter both lanes have independent
masks. Handle this properly and add support for vpermilpd.

llvm-svn: 136200
2011-07-27 00:56:34 +00:00
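
Two illustrative shuffles (assumed examples, not from the commit) showing the difference: a vpermilps-style mask repeats the same 4-element pattern in both 128-bit lanes, while a vpermilpd-style mask may select independently in each lane.

  ; v8f32: reverse within each lane, same pattern in both lanes (vpermilps-like)
  define <8 x float> @ps(<8 x float> %a) {
    %r = shufflevector <8 x float> %a, <8 x float> undef, <8 x i32> <i32 3, i32 2, i32 1, i32 0, i32 7, i32 6, i32 5, i32 4>
    ret <8 x float> %r
  }
  ; v4f64: swap the low lane, keep the high lane; lanes chosen independently (vpermilpd-like)
  define <4 x double> @pd(<4 x double> %a) {
    %r = shufflevector <4 x double> %a, <4 x double> undef, <4 x i32> <i32 1, i32 0, i32 2, i32 3>
    ret <4 x double> %r
  }
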
Bruno Cardoso Lopes
1adb959ee8 Remove more dead code!
llvm-svn: 136199
2011-07-27 00:56:27 +00:00
Evan Cheng
bff9934d9a Support .code32 and .code64 in X86 assembler.
llvm-svn: 136197
2011-07-27 00:38:12 +00:00
Benjamin Kramer
bfc2dfe3f7 Add a neat little two's complement hack for x86.
On x86 we can't encode an immediate LHS of a sub directly. If the RHS comes from an XOR with a constant we can
fold the negation into the xor and add one to the immediate of the sub. Then we can turn the sub into an add,
which can be commuted and encoded efficiently.

This code is generated for __builtin_clz and friends.

llvm-svn: 136167
2011-07-26 22:42:13 +00:00
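
A worked IR example of the identity being exploited, c - (x ^ k) == (x ^ ~k) + (c + 1); the constants are illustrative:

  define i32 @before(i32 %x) {
    %t = xor i32 %x, 7
    %r = sub i32 31, %t            ; immediate LHS, awkward to encode on x86
    ret i32 %r
  }
  ; after the combine: an equivalent sequence ending in an easily encoded add
  define i32 @after(i32 %x) {
    %t = xor i32 %x, -8            ; -8 == ~7
    %r = add i32 %t, 32            ; 32 == 31 + 1
    ret i32 %r
  }
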
Bruno Cardoso Lopes
e53bb853ea Recognize unpckh* masks and match 256-bit versions. The new versions are
different from the previous 128-bit ones because they work in lanes.
Update a few comments and add testcases

llvm-svn: 136157
2011-07-26 22:03:40 +00:00
Eli Friedman
4e16c5341a Prevent x86-specific DAGCombine from creating nodes with illegal type (which could not be selected). Fixes a minor isel issue that was breaking the testcase from r136130.
llvm-svn: 136148
2011-07-26 21:02:58 +00:00
Bruno Cardoso Lopes
a493ad3938 Remove now unused patterns. 0 insertions(+), 98 deletions(-)
llvm-svn: 136109
2011-07-26 18:22:39 +00:00
Bruno Cardoso Lopes
b24e958ffb Clean up old matching for PUNPCK* variants
llvm-svn: 136108
2011-07-26 18:22:27 +00:00
Bill Wendling
7f89bce2be The compact unwinding offsets are divided by 8 on 64-bit machines.
llvm-svn: 136065
2011-07-26 08:03:49 +00:00
Bruno Cardoso Lopes
ab40a57cce Add 256-bit isel for movsldup/movshdup
llvm-svn: 136051
2011-07-26 02:39:32 +00:00
Bruno Cardoso Lopes
85d8ca32c9 More movsldup/movshdup cleanup. Rewrite the mask matching function and add
support for 256-bit versions (but no instruction selection yet, coming next).

llvm-svn: 136050
2011-07-26 02:39:28 +00:00
Bruno Cardoso Lopes
2359b1d351 More cleanup, subtarget info isn't used here.
llvm-svn: 136049
2011-07-26 02:39:25 +00:00
Bruno Cardoso Lopes
cde45ac9ca Add 128-bit AVX versions of movshdup/movsldup
llvm-svn: 136048
2011-07-26 02:39:23 +00:00
Bruno Cardoso Lopes
25698b90e9 Cleanup movsldup/movshdup matching.
27 insertions(+), 62 deletions(-)

llvm-svn: 136047
2011-07-26 02:39:13 +00:00
Evan Cheng
8e380354c4 Rename createCodeEmitter to createMCCodeEmitter; createObjectStreamer to createMCObjectStreamer.
llvm-svn: 136031
2011-07-26 00:42:34 +00:00
Evan Cheng
2e96785311 Rename TargetAsmParser to MCTargetAsmParser and TargetAsmLexer to MCTargetAsmLexer; rename createAsmLexer to createMCAsmLexer and createAsmParser to createMCAsmParser.
llvm-svn: 136027
2011-07-26 00:24:13 +00:00
Chandler Carruth
a571c5d5d3 Clean up a pile of hacks in our CMake build relating to TableGen.
The first problem to fix is to stop creating synthetic *Table_gen
targets next to all of the LLVM libraries. These had no real effect as
CMake specifies that add_custom_command(OUTPUT ...) directives (what the
'tablegen(...)' stuff expands to) are implicitly added as dependencies
to all the rules in that CMakeLists.txt.

These synthetic rules started to cause problems as we started more and
more heavily using tablegen files from *subdirectories* of the one where
they were generated. Within those directories, the set of tablegen
outputs was still available and so these synthetic rules added them as
dependencies of those subdirectories. However, they were no longer
properly associated with the custom command to generate them. Most of
the time this "just worked" because something would get to the parent
directory first, and run tablegen there. Once run, the files existed and
the build proceeded happily. However, as more and more subdirectories
have started using this, the probability of this failing to happen has
increased. Recently with the MC refactorings, it became quite common for
me when touching a large enough number of targets.

To add insult to injury, several of the backends *tried* to fix this by
adding explicit dependencies back to the parent directory's tablegen
rules, but those dependencies didn't work as expected -- they weren't
forming a linear chain, they were adding another thread in the race.

This patch removes these synthetic rules completely, and adds a much
simpler function to declare explicitly that a collection of tablegen'ed
files are referenced by other libraries. From that, we can add explicit
dependencies from the smaller libraries (such as every architecture's
Desc library) on this and correctly form a linear sequence. All of the
backends are updated to use it, sometimes replacing the existing attempt
at adding a dependency, sometimes adding a previously missing dependency
edge.

Please let me know if this causes any problems, but it fixes a rather
persistent and problematic source of build flakiness on our end.

llvm-svn: 136023
2011-07-26 00:09:08 +00:00
Evan Cheng
2a0a4e1a73 Rename TargetAsmBackend to MCAsmBackend; rename createAsmBackend to createMCAsmBackend.
llvm-svn: 136010
2011-07-25 23:24:55 +00:00
Bruno Cardoso Lopes
c94d6a2d2c Codegen all-ones vectors better when using AVX: vpcmpeqd + vinsertf128.
This also fixes PR10452

llvm-svn: 136004
2011-07-25 23:05:32 +00:00
Bruno Cardoso Lopes
f457bc8120 Add remaining 256-bit vector bitcasts. This also fixes PR10451
llvm-svn: 136003
2011-07-25 23:05:28 +00:00
Bruno Cardoso Lopes
9380919dc5 - Handle the special scalar_to_vector case: splats, using a native 128-bit
shuffle before inserting into a 256-bit vector.
- Add AVX versions of movd/movq instructions
- Introduce a few COPY patterns to match insert_subvector instructions.
This turns a trivial insert_subvector instruction into a register copy,
coalescing the xmm into a ymm and avoiding emitting one more instruction.

llvm-svn: 136002
2011-07-25 23:05:25 +00:00
Bruno Cardoso Lopes
c13bd4fd8c Reintroduce r135730; this is indeed the right approach, as there is no
native 256-bit vector instruction to do scalar_to_vector.

llvm-svn: 136001
2011-07-25 23:05:16 +00:00
Benjamin Kramer
ba702bf08a Add a note about efficient codegen for binary log.
llvm-svn: 135996
2011-07-25 22:30:00 +00:00
Eli Friedman
234bbb2b95 Get rid of an incorrect optimization for shuffles with PALIGNR and simplify isPALIGNRMask.
Addresses PR10466, although the crash from that PR only triggers in cases where DAGCombine misses optimizing a shuffle.

llvm-svn: 135980
2011-07-25 21:36:45 +00:00
Evan Cheng
040076bda2 Separate MCInstPrinter registration from AsmPrinter registration.
llvm-svn: 135974
2011-07-25 21:20:24 +00:00
Evan Cheng
1d04cd9b26 Fix the last bits of the MC layer issues. llvm-mc doesn't need to initialize TargetMachines anymore.
llvm-svn: 135963
2011-07-25 20:53:02 +00:00
Evan Cheng
5897422319 Code clean up.
llvm-svn: 135954
2011-07-25 20:18:48 +00:00
Bill Wendling
9a3aca1f35 Update the comment. This feature is available only on Darwin at the moment. Though it's not Darwin-specific.
llvm-svn: 135951
2011-07-25 20:15:15 +00:00
Oscar Fuentes
06558951e6 Unbreak the build.
llvm-svn: 135949
2011-07-25 20:13:36 +00:00
Evan Cheng
bbacc163bb More refactoring.
llvm-svn: 135939
2011-07-25 19:33:48 +00:00
Evan Cheng
6fb04ad32e Refactor X86 target to separate MC code from Target code.
llvm-svn: 135930
2011-07-25 18:43:53 +00:00
Bill Wendling
713310d300 Changed disabled code into a flag.
llvm-svn: 135924
2011-07-25 18:04:49 +00:00
Bill Wendling
b910d857ef Remove dead variable.
llvm-svn: 135923
2011-07-25 18:01:27 +00:00
Bill Wendling
3817554121 After we've modified the prolog to save volatile registers, generate the compact
unwind encoding for that function. This simply crawls through the prolog looking
for machine instrs marked as "frame setup". It can calculate from these what the
compact unwind should look like.

This is currently disabled because of needed linker support. But initial tests
look good.

llvm-svn: 135922
2011-07-25 18:00:28 +00:00