Commit Graph

2331 Commits

Author SHA1 Message Date
Bill Wendling
648c3ccd7b Use the predicate methods off of AttributeSet instead of Attribute.
llvm-svn: 171257
2012-12-30 13:50:49 +00:00
Bill Wendling
e0920e4122 Remove the Function::getFnAttributes method in favor of using the AttributeSet
directly.

This is in preparation for removing the use of the 'Attribute' class as a
collection of attributes. That will shift to the AttributeSet class instead.

llvm-svn: 171253
2012-12-30 10:32:01 +00:00
Craig Topper
175e8218c5 Remove intrinsic-specific instructions for (V)SQRTPS/PD. Instead, lower to target-independent ISD nodes and use the existing patterns for those.
llvm-svn: 171237
2012-12-29 18:18:20 +00:00
Craig Topper
84898044c9 Merge similar functionality using a nested switch.
llvm-svn: 171229
2012-12-29 17:19:06 +00:00
Craig Topper
93fdde7fff Remove intrinsic-specific instructions for SSE/SSE2/AVX floating-point max/min instructions. Lower them to target-specific nodes and use those patterns instead. This also allows them to be commuted if UnsafeFPMath is enabled.
llvm-svn: 171227
2012-12-29 16:44:25 +00:00
Jakub Staszak
f9fc85f71d Simplify code, no functionality change.
llvm-svn: 171226
2012-12-29 15:57:26 +00:00
Nadav Rotem
c8ec143f05 CostModel: initial checkin for code that estimates the cost of special shuffles.
llvm-svn: 171180
2012-12-28 08:19:03 +00:00
Nadav Rotem
d5216d7ac7 wrap 80-col lines.
llvm-svn: 171179
2012-12-28 07:28:43 +00:00
Nadav Rotem
8f2be23305 AVX: Move the ZEXT/ANYEXT DAGCombine optimizations to the lowering of these operations. The old test cases still cover all of these lowerings/optimizations. The single change is that anyext no longer needs to zero a register, because it does not use the exact same code path as zero_extend.
llvm-svn: 171178
2012-12-28 05:45:24 +00:00
Nadav Rotem
6850870c3f Reverse the 'if' condition and reduce the indentation.
llvm-svn: 171172
2012-12-27 23:08:05 +00:00
Nadav Rotem
051a0524a1 AVX/AVX2: Move the SEXT lowering code from a target-specific DAG combine to a lowering function.
llvm-svn: 171170
2012-12-27 22:47:16 +00:00
Nadav Rotem
2f6e04c7be On AVX/AVX2 the type v8i1 is legalized to v8i16, which is an XMM-sized
register. In most cases we actually compare or select YMM-sized registers
and mixing the two types creates horrible code. This commit optimizes
some of the transition sequences.

PR14657.

llvm-svn: 171148
2012-12-27 08:15:45 +00:00
Nadav Rotem
76034b9bb4 AVX/AVX2: Move the code that lowers vector truncs from a DAG-combine hook to a custom lowering hook.
The vector truncs were scalarized during LegalizeVectorOps, later vectorized again by some DAGCombine optimization,
and finally lowered by a DAG-combine optimization. Now they are properly lowered during LegalizeVectorOps.
No new testcase because the original testcases still work.

llvm-svn: 171146
2012-12-27 07:45:10 +00:00
Nadav Rotem
715148a69d Reformat the docs.
llvm-svn: 171091
2012-12-26 04:59:20 +00:00
Benjamin Kramer
e4eabce005 X86: Shave off one shuffle from the pcmpeqq sequence for SSE2 by making use of the commutativity of 'and'.
llvm-svn: 171064
2012-12-25 13:09:08 +00:00
Benjamin Kramer
2e313ba01d X86: Custom lower <2 x i64> eq and ne when SSE41 is not available.
pcmpeqd, pshufd, pshufd, pand is cheaper than unpack + cmpq, sbbq, cmpq, sbbq + pack.
Small speedup on loop-vectorized viterbi (-march=core2).

llvm-svn: 171063
2012-12-25 12:54:19 +00:00
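A minimal IR sketch of the kind of compare this lowering targets (hypothetical function, not a test from the commit); without SSE4.1's pcmpeqq it now compiles to the pcmpeqd/pshufd/pand sequence above:

define <2 x i64> @eq_v2i64(<2 x i64> %a, <2 x i64> %b) {
  %cmp = icmp eq <2 x i64> %a, %b                 ; v2i64 equality, custom lowered pre-SSE41
  %res = sext <2 x i1> %cmp to <2 x i64>
  ret <2 x i64> %res
}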
Nick Lewycky
3a64ce7001 Quiet gcc's -Wparenthesis warning. No functionality change.
llvm-svn: 171044
2012-12-24 19:58:45 +00:00
Nadav Rotem
db133d1c52 whitespace
llvm-svn: 170997
2012-12-23 07:33:44 +00:00
Nadav Rotem
e237376e62 Loop Vectorizer: Update the cost model of scatter/gather operations and make
them more expensive.

llvm-svn: 170995
2012-12-23 07:23:55 +00:00
Benjamin Kramer
52d36d0ccd X86: Turn mul of <4 x i32> into pmuludq when no SSE4.1 is available.
pmuludq is slow, but it turns out that all the unpacking and packing of the
scalarized mul is even slower. 10% speedup on loop-vectorized paq8p.

llvm-svn: 170985
2012-12-22 16:07:56 +00:00
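As a sketch (hypothetical test, not from the commit), a plain v4i32 multiply now takes this path when pmulld (SSE4.1) is unavailable:

define <4 x i32> @mul_v4i32(<4 x i32> %a, <4 x i32> %b) {
  %m = mul <4 x i32> %a, %b        ; lowered to pmuludq + shuffles on plain SSE2
  ret <4 x i32> %m
}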
Benjamin Kramer
a9cc52a6b3 X86: Emit vector sext as shuffle + sra if vpmovsx is not available.
Also loosen the SSSE3 dependency a bit, expanded pshufb + psra is still better
than scalarized loads. Fixes PR14590.

llvm-svn: 170984
2012-12-22 11:34:28 +00:00
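A minimal sketch of the affected pattern (hypothetical function name): without (v)pmovsx, a vector sign extension like the one below is emitted as a shuffle that moves each element into the high half of a wider lane, followed by an arithmetic shift right:

define <4 x i32> @sext_v4i16(<4 x i16> %x) {
  %s = sext <4 x i16> %x to <4 x i32>   ; shuffle (or pshufb) + psrad instead of scalarized loads
  ret <4 x i32> %s
}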
Benjamin Kramer
9297ff5d80 X86: Match pmin/pmax as a target-specific DAG combine. This occurs during vectorization.
Part of PR14667.

llvm-svn: 170908
2012-12-21 17:46:58 +00:00
Benjamin Kramer
06ec6482f0 X86: Match the SSE/AVX min/max vector ops using a custom node instead of intrinsics
This is very mechanical, no functionality change. Preparation for PR14667.

llvm-svn: 170898
2012-12-21 14:04:55 +00:00
Nadav Rotem
3d4c6351cf Improve the X86 cost model for loads and stores.
llvm-svn: 170830
2012-12-21 01:33:59 +00:00
Patrik Hagglund
ed576de43f Change TargetLowering::getTypeForExtArgOrReturn to take and return
MVTs, instead of EVTs.

llvm-svn: 170537
2012-12-19 12:02:25 +00:00
Patrik Hagglund
0515a9093d Change TargetLowering::findRepresentativeClass to take an MVT, instead
of EVT.

llvm-svn: 170532
2012-12-19 11:30:36 +00:00
NAKAMURA Takumi
dead4f722b X86ISelLowering.cpp: Fix warnings. [-Wlogical-op-parentheses]
llvm-svn: 170523
2012-12-19 10:12:48 +00:00
Elena Demikhovsky
12a5e01a52 Optimized load + SIGN_EXTEND patterns in the X86 backend.
llvm-svn: 170506
2012-12-19 07:50:20 +00:00
Bill Wendling
56d9c4b832 Rename the 'Attributes' class to 'Attribute'. It's going to represent a single attribute in the future.
llvm-svn: 170502
2012-12-19 07:18:57 +00:00
Jakub Staszak
837a7e5c18 Reverse order of checking SSE level when calculating compare cost, so we check
AVX2 before AVX.

llvm-svn: 170464
2012-12-18 22:57:56 +00:00
Craig Topper
1c6c752718 Simplify BMI ANDN matching to use patterns instead of a DAG combine. Also add ANDN to isDefConvertible.
llvm-svn: 170305
2012-12-17 05:12:30 +00:00
Benjamin Kramer
8d191a2586 X86: Add a couple of target-specific DAG combines that turn VSELECTs into psubus if possible.
We match the pattern "x >= y ? x-y : 0" into "subus x, y" and two special cases
if y is a constant. DAGCombiner canonicalizes those so we first have to undo the
canonicalization for those cases. The pattern occurs in gzip when the loop
vectorizer is enabled. Part of PR14613.

llvm-svn: 170273
2012-12-15 16:47:44 +00:00
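The matched pattern in IR terms, as a minimal sketch (hypothetical test, not from the commit):

define <8 x i16> @subus_v8i16(<8 x i16> %x, <8 x i16> %y) {
  %cmp = icmp uge <8 x i16> %x, %y                ; x >= y ?
  %sub = sub <8 x i16> %x, %y                     ; x - y
  %sel = select <8 x i1> %cmp, <8 x i16> %sub, <8 x i16> zeroinitializer
  ret <8 x i16> %sel                              ; -> subus x, y
}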
Nadav Rotem
92f9e3ce2c TypeLegalizer: Do not generate target-specific nodes with illegal types, because we can't type-legalize them.
llvm-svn: 170245
2012-12-14 21:20:37 +00:00
Evan Cheng
e42df0ea81 Sorry about the churn. One more change to the getOptimalMemOpType() hook. Did I
mention the inline memcpy / memset expansion code is a mess?

This patch splits the ZeroOrLdSrc argument into two: IsMemset and ZeroMemset.
The first indicates whether it is expanding a memset or a memcpy / memmove.
The latter indicates whether the memset is a memset of zero. It's entirely possible
(likely, even) that targets may want to do different things for memcpy and
memset of zero.

llvm-svn: 169959
2012-12-12 02:34:41 +00:00
Evan Cheng
d1c2821678 - Rename isLegalMemOpType to isSafeMemOpType. "Legal" is a very overloaded term.
Also added more comments to explain why it is generally OK to return true.
- Rename the getOptimalMemOpType argument IsZeroVal to ZeroOrLdSrc. It's meant to
be true for a loaded source (memcpy) or zero constants (memset). The poor name
choice is probably some kind of legacy issue.

llvm-svn: 169954
2012-12-12 01:32:07 +00:00
Evan Cheng
0e6ff04636 Avoid using lossy load / stores for memcpy / memset expansion. e.g.
f64 load / store on non-SSE2 x86 targets.

llvm-svn: 169944
2012-12-12 00:42:09 +00:00
Patrik Hagglund
caaedc6ade Revert EVT->MVT changes, r169836-169851, due to buildbot failures.
llvm-svn: 169854
2012-12-11 11:14:33 +00:00
Patrik Hagglund
f45125a118 Change TargetLowering::getTypeForExtArgOrReturn to take and return
MVTs, instead of EVTs.

Accordingly, add bitsLT (and similar) to MVT.

llvm-svn: 169850
2012-12-11 10:20:51 +00:00
Patrik Hagglund
9597517d65 Change TargetLowering::findRepresentativeClass to take an MVT, instead
of EVT.

llvm-svn: 169845
2012-12-11 09:57:18 +00:00
Evan Cheng
86dd733bc8 Some enhancements for memcpy / memset inline expansion.
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
   bytes. e.g. On x86, use two pairs of movups / movaps for 17 - 31 byte copies.
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is. e.g.
   x86 and ARM.
3. When memcpy from a constant string, do *not* replace the load with a constant
   if it's not possible to materialize an integer immediate with a single
   instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned load / stores more aggressively if target hooks indicate they
   are "fast".
5. Update ARM target hooks to use unaligned load / stores. e.g. vld1.8 / vst1.8.
   Also increase the threshold to something reasonable (8 for memset, 4 pairs
   for memcpy).

This significantly improves Dhrystone, up to 50% on ARM iOS devices.

rdar://12760078

llvm-svn: 169791
2012-12-10 23:21:26 +00:00
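For item 1, a sketch of the kind of call that benefits (hypothetical function; the intrinsic is written with the five-operand signature used at the time, including an explicit alignment operand):

declare void @llvm.memcpy.p0i8.p0i8.i32(i8* nocapture, i8* nocapture, i32, i32, i1) nounwind

define void @copy20(i8* %dst, i8* %src) nounwind {
  ; a 20-byte copy can now be emitted as two overlapping 16-byte movups pairs
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 20, i32 1, i1 false)
  ret void
}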
Shuxin Yang
7221b14d96 - Re-enable population count loop idiom recognition
- fix a bug which caused a segfault.
- add two test cases which were causing crashes

llvm-svn: 169687
2012-12-09 03:12:46 +00:00
Chandler Carruth
329a5c1e03 Revert the patches adding a popcount loop idiom recognition pass.
There are still bugs in this pass, as well as other issues that are
being worked on, but the bugs are crashers that occur pretty easily in
the wild. Test cases have been sent to the original commit's review
thread.

This reverts the commits:
  r169671: Fix a logic error.
  r169604: Move the popcnt tests to an X86 subdirectory.
  r168931: Initial commit adding the pass.

llvm-svn: 169683
2012-12-08 22:18:29 +00:00
Bill Wendling
3f153ce37b s/AttrListPtr/AttributeSet/g to better label what this class is going to be in the near future.
llvm-svn: 169651
2012-12-07 23:16:57 +00:00
Nadav Rotem
d3d6e7cfb2 When we use the BLEND instruction that uses the MSB as a mask, we can remove
the VSRI instruction before it since it does not affect the MSB.

Thanks Craig Topper for suggesting this.

llvm-svn: 169638
2012-12-07 21:43:11 +00:00
Nadav Rotem
fc6e3e3272 X86: Prefer using VPSHUFD over VPERMIL because it has better throughput.
llvm-svn: 169624
2012-12-07 19:01:13 +00:00
Evan Cheng
4cdc6c4eef Replace r169459 with something safer. Rather than having computeMaskedBits
understand the target implementation of any_extend / extload, just generate
zero_extend in place of any_extend for liveouts when the target knows the
zero_extend will be implicit (e.g. ARM ldrb / ldrh) or folded (e.g. x86 movz).

rdar://12771555

llvm-svn: 169536
2012-12-06 19:13:27 +00:00
Jakub Staszak
6a77423b5f Remove unneeded function, since PR8156 was fixed over a year ago.
llvm-svn: 169534
2012-12-06 19:05:46 +00:00
Jakub Staszak
b18925a384 Simplify code.
llvm-svn: 169521
2012-12-06 18:22:59 +00:00
Evan Cheng
c1db873871 Let targets provide hooks that compute known zero and ones for any_extend
and extloads. If they are implemented as zero-extend, or implicitly
zero-extend, then this can enable more demanded bits optimizations. e.g.

define void @foo(i16* %ptr, i32 %a) nounwind {
entry:
  %tmp1 = icmp ult i32 %a, 100
  br i1 %tmp1, label %bb1, label %bb2
bb1:
  %tmp2 = load i16* %ptr, align 2
  br label %bb2
bb2:
  %tmp3 = phi i16 [ 0, %entry ], [ %tmp2, %bb1 ]
  %cmp = icmp ult i16 %tmp3, 24
  br i1 %cmp, label %bb3, label %exit
bb3:
  call void @bar() nounwind
  br label %exit
exit:
  ret void
}

This compiles to the following before:
        push    {lr}
        mov     r2, #0
        cmp     r1, #99
        bhi     LBB0_2
@ BB#1:                                 @ %bb1
        ldrh    r2, [r0]
LBB0_2:                                 @ %bb2
        uxth    r0, r2
        cmp     r0, #23
        bhi     LBB0_4
@ BB#3:                                 @ %bb3
        bl      _bar
LBB0_4:                                 @ %exit
        pop     {lr}
        bx      lr

The uxth is not needed since ldrh implicitly zero-extends the high bits. With
this change it is eliminated.

rdar://12771555

llvm-svn: 169459
2012-12-06 01:28:01 +00:00
Elena Demikhovsky
fa1ab36215 Simplified BLEND pattern matching for shuffles.
Generate VPBLENDD for AVX2 and VPBLENDW for v16i16 type on AVX2.

llvm-svn: 169366
2012-12-05 09:24:57 +00:00
Evan Cheng
f87900c4b1 Add x86 isel lowering logic to form bit test with inverted condition. e.g.
x ^ -1.

Patch by David Majnemer.
rdar://12755626

llvm-svn: 169339
2012-12-05 00:10:38 +00:00
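One plausible shape of the pattern in IR, as a hedged sketch (names hypothetical, not from the commit's test):

define i32 @bt_not(i32 %x, i32 %n) {
  %inv = xor i32 %x, -1            ; x ^ -1
  %sh  = lshr i32 %inv, %n
  %bit = and i32 %sh, 1            ; testing bit n of ~x can now form a bt with inverted condition
  ret i32 %bit
}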
Chandler Carruth
a490793037 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Shuxin Yang
a7c032d8b5 rdar://12100355 (part 1)
This revision attempts to recognize the following population-count pattern:

 while(a) { c++; ... ; a &= a - 1; ... },
  where <c> and <a> could be used multiple times in the loop body.

 TODO: On X86-64 and ARM, __builtin_ctpop() is not expanded to an efficient
instruction sequence; this needs to be improved in the following commits.

Reviewed by Nadav, really appreciated!

llvm-svn: 168931
2012-11-29 19:38:54 +00:00
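In IR, the C loop in the message corresponds roughly to the following (a hypothetical sketch), which the pass rewrites into a call to @llvm.ctpop:

define i32 @popcount_loop(i32 %a0) {
entry:
  %zero = icmp eq i32 %a0, 0
  br i1 %zero, label %exit, label %loop
loop:
  %a = phi i32 [ %a0, %entry ], [ %next, %loop ]
  %c = phi i32 [ 0, %entry ], [ %cinc, %loop ]
  %cinc = add i32 %c, 1
  %dec  = add i32 %a, -1
  %next = and i32 %a, %dec         ; a &= a - 1 clears the lowest set bit
  %done = icmp eq i32 %next, 0
  br i1 %done, label %exit, label %loop
exit:
  %res = phi i32 [ 0, %entry ], [ %cinc, %loop ]
  ret i32 %res
}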
Elena Demikhovsky
98c09d42c5 I changed hasAVX() to hasFp256() and hasAVX2() to hasInt256() in X86ISelLowering.cpp.
The logic was not changed, only names.

llvm-svn: 168875
2012-11-29 12:44:59 +00:00
Jakub Staszak
12c307c8fd Normalize splat 256-bit vectors with 8 elements.
llvm-svn: 168600
2012-11-26 19:24:31 +00:00
Craig Topper
3ffe8c22c3 Mark ISD::FMA as Legal instead of custom for x86 with FMA3/FMA4. Needed so that llvm.muladd can be converted to ISD::FMA for fp_contract.
llvm-svn: 168413
2012-11-21 05:36:24 +00:00
Duncan Sands
98b6a4f4b5 Add the Erlang/HiPE calling convention, patch by Yiannis Tsiouris.
llvm-svn: 168166
2012-11-16 12:36:39 +00:00
Craig Topper
ad33f996a6 Use roundps/pd for llvm.ceil, llvm.trunc, llvm.rint, and llvm.nearbyint of vector types.
llvm-svn: 168141
2012-11-16 06:37:56 +00:00
Craig Topper
216a5138e7 Add llvm.ceil, llvm.trunc, llvm.rint, llvm.nearbyint intrinsics.
llvm-svn: 168025
2012-11-15 06:51:10 +00:00
Benjamin Kramer
0006b33581 X86: Enable SSE memory intrinsics even when stack alignment is less than 16 bytes.
The stack realignment code was fixed to work when there is stack realignment and
a dynamic alloca is present, so this shouldn't cause correctness issues anymore.

Note that this also enables generation of AVX instructions for memset
under the assumptions:
- Unaligned loads/stores are always fast on CPUs supporting AVX
- AVX is not slower than SSE
We may need some tweaked heuristics if one of those assumptions turns out not to
be true.

Effectively reverts r58317. Part of PR2962.

llvm-svn: 167967
2012-11-14 20:08:40 +00:00
Craig Topper
c13d6d998c Factor out an overly replicated typecast. No functional change.
llvm-svn: 167916
2012-11-14 06:41:09 +00:00
Manman Ren
e98ec5dd77 X86: when constructing VZEXT_LOAD from other loads, make sure its output
chain is correctly set up.

As an example, if the original load must happen before later stores, we need
to make sure the constructed VZEXT_LOAD is constrained to be before the stores.

rdar://12684358

llvm-svn: 167859
2012-11-13 19:13:05 +00:00
Michael Liao
5bf5c77881 Fix PR14314
- Fix operand order for atomic sub, where the minuend is the value
  loaded from memory and the subtrahend is the parameter specified.

llvm-svn: 167718
2012-11-12 06:49:17 +00:00
Craig Topper
e9616c2f3c Move some helper methods to being static functions in the implementation file.
llvm-svn: 167696
2012-11-11 22:45:02 +00:00
Craig Topper
f8089e0864 Remove unnecessary subtraction and addition by 1 around a couple for loops.
llvm-svn: 167673
2012-11-10 09:25:36 +00:00
Craig Topper
f04d47d0ca Tidy up spacing. No functional change.
llvm-svn: 167671
2012-11-10 09:02:47 +00:00
Craig Topper
adebc23cac Simplify custom emitter code for pcmp(e/i)str(i/m) and make the helper functions static.
llvm-svn: 167669
2012-11-10 08:57:41 +00:00
Craig Topper
f424da6ff9 Cleanup pcmp(e/i)str(m/i) instruction definitions and load folding support.
llvm-svn: 167652
2012-11-10 01:23:36 +00:00
Nadav Rotem
41e4da88d8 indent
llvm-svn: 167607
2012-11-09 07:02:24 +00:00
Michael Liao
59114df23b Add support of RTM from TSX extension
- Add RTM code generation support throught 3 X86 intrinsics:
  xbegin()/xend() to start/end a transaction region, and xabort() to abort a
  tranaction region

llvm-svn: 167573
2012-11-08 07:28:54 +00:00
Jakub Staszak
473b6f7d13 Simplify code. No functionality change.
llvm-svn: 167505
2012-11-06 23:52:19 +00:00
Nadav Rotem
37812539c2 Make the helper functions static. No functional change.
llvm-svn: 167501
2012-11-06 23:36:00 +00:00
Nadav Rotem
dce9a7a599 CostModel: add another known vector trunc optimization.
llvm-svn: 167488
2012-11-06 21:17:17 +00:00
Nadav Rotem
2fb5dc3a15 Cost Model: add tables for some avx type-conversion hacks.
llvm-svn: 167480
2012-11-06 19:33:53 +00:00
Nadav Rotem
514a3cb69c Refactor the getTypeLegalizationCost interface. No functionality change.
llvm-svn: 167422
2012-11-05 23:57:45 +00:00
Nadav Rotem
890d7c7f8e CostModel: Add tables for the common x86 compares.
llvm-svn: 167421
2012-11-05 23:48:20 +00:00
Richard Smith
0befa1c4e0 Suppress signed/unsigned comparison warning.
llvm-svn: 167410
2012-11-05 22:01:44 +00:00
Nadav Rotem
04d64771f6 Cost Model: Normalize the insert/extract index when splitting types
llvm-svn: 167402
2012-11-05 21:12:13 +00:00
Nadav Rotem
4def3aace5 Implement the cost of abnormal x86 instruction lowering as a table.
llvm-svn: 167395
2012-11-05 19:32:46 +00:00
Nadav Rotem
c9bbabd5e9 X86 CostModel: Add support for some of the common arithmetic instructions for SSE4, AVX and AVX2.
llvm-svn: 167347
2012-11-03 00:39:56 +00:00
Chandler Carruth
0a6b99ee2b Revert the majority of the next patch in the address space series:
r165941: Resubmit the changes to llvm core to update the functions to
         support different pointer sizes on a per address space basis.

Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.

However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.

In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.

In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.

This is part of my ongoing effort to clean out the partial address space
code which carries high risk and low test coverage, and not likely to be
finished before the 3.2 release looms closer. Duncan and I would both
like to see the above issues addressed before we return to these
changes.

llvm-svn: 167222
2012-11-01 09:14:31 +00:00
Shuxin Yang
4762eb1825 (For X86) Enhancement to the add-carry/sub-borrow (adc/sbb) optimization.
The adc/sbb optimization is able to convert the following expressions
into a single adc/sbb instruction:
  (ult) ... = x + 1 // where the ult is an unsigned-less-than comparison
  (ult) ... = x - 1

  This change flips the "x >u y" (i.e. ugt comparison) in order
to expose the adc/sbb opportunity.

llvm-svn: 167180
2012-10-31 23:11:48 +00:00
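One member of this pattern family, sketched in IR under the assumption that the compare result feeds an add (hypothetical test, not from the commit):

define i32 @add_carry(i32 %x, i32 %y, i32 %sum) {
  %c   = icmp ult i32 %x, %y       ; carry bit from an unsigned compare
  %ext = zext i1 %c to i32
  %r   = add i32 %sum, %ext        ; folds to cmp + adc/sbb instead of setcc + add
  ret i32 %r
}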
Michael Liao
299b55458f Clean up redundant SP register maintained in X86 TLI
llvm-svn: 167104
2012-10-31 04:14:09 +00:00
Manman Ren
584c3daf8d X86 MMX: optimize transfer from mmx to i32
We used to generate a store (movq) + a load.
Now we use movd.

rdar://9946746

llvm-svn: 167056
2012-10-30 22:15:38 +00:00
Jakub Staszak
f1cddf738b Re-commit r166971. I reverted it too quickly, before buildbots had a chance
to test it with chapni's fix (-mattr=+avx).

llvm-svn: 166985
2012-10-30 00:01:57 +00:00
Jakub Staszak
ce95e4429f Revert r166971. It causes buildbot failure. To be investigated.
llvm-svn: 166979
2012-10-29 23:13:50 +00:00
Jakub Staszak
51f21a007f Remove unused variable.
llvm-svn: 166973
2012-10-29 22:04:32 +00:00
Jakub Staszak
6067106145 Simplify code. No functionality change.
llvm-svn: 166972
2012-10-29 22:02:26 +00:00
Jakub Staszak
ded6f21890 Allow folding a vector load if there is more than one bitcast, so in the case:
%0 = load <8 x i16>* %dest
%1 = shufflevector <8 x i16> %0, <8 x i16> %in,
      <8 x i32> < i32 0, i32 1, i32 2, i32 3, i32 13, i32 undef, i32 14, i32 14>
store <8 x i16> %1, <8 x i16>* %dest

We get:
  vmovlpd (%eax), %xmm0, %xmm0

instead of:
  vmovaps (%eax), %xmm1
  vmovsd  %xmm1, %xmm0, %xmm0

No extra test-case is added. I just fixed the existing one
(also it uses FileCheck now).

llvm-svn: 166971
2012-10-29 21:56:35 +00:00
Duncan Sands
9234853009 Silence a GCC warning about comparing signed and unsigned types.
llvm-svn: 166922
2012-10-29 11:29:53 +00:00
Michael Liao
d10b6df7c1 Clean up where SlotSize should be used instead of pointer size.
llvm-svn: 166664
2012-10-25 06:29:14 +00:00
Michael Liao
70bb8004bd Add custom conversion from v2u32 to v2f32 in 32-bit mode
- As there are no 64-bit GPRs in 32-bit mode, a custom conversion from v2u32 to
  v2f32 is added to improve the efficiency of the generated code.

llvm-svn: 166545
2012-10-24 04:09:32 +00:00
Michael Liao
7ca099f727 Fix PR14161
- Check that the index being extracted is constant 0 before simplifying.
  Otherwise, retain the original sequence.

llvm-svn: 166504
2012-10-23 21:40:15 +00:00
Matt Beaumont-Gay
dc9dc4a3e5 Silence -Wsign-compare
llvm-svn: 166494
2012-10-23 19:46:36 +00:00
Michael Liao
a21e82adef Add custom UINT_TO_FP from v4i8/v4i16/v8i8/v8i16 to v4f32/v8f32
- Replace v4i8/v8i8 -> v8f32 DAG combine with custom lowering to reduce
  DAG combine overhead.
- Extend the support to v4i16/v8i16 as well.

llvm-svn: 166487
2012-10-23 17:36:08 +00:00
Michael Liao
23c890e0a8 Enable lowering ZERO_EXTEND/ANY_EXTEND to PMOVZX from SSE4.1
llvm-svn: 166486
2012-10-23 17:34:00 +00:00
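A minimal sketch of an extension that can now select PMOVZX (hypothetical function name):

define <4 x i32> @zext_v4i16(<4 x i16> %x) {
  %z = zext <4 x i16> %x to <4 x i32>   ; pmovzxwd on SSE4.1 and later
  ret <4 x i32> %z
}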
Shuxin Yang
3ad15929e7 This patch fixes radar://8426430. It is about llvm support of __builtin_debugtrap(),
which is supposed to consistently raise SIGTRAP across all systems. In contrast,
__builtin_trap() behaves differently on different systems. e.g. it raises SIGTRAP on ARM, and
SIGILL on X86. The purpose of __builtin_debugtrap() is to consistently provide "trap"
functionality, and in the meantime preserve compatibility with gcc on __builtin_trap().

  The X86 backend is already able to handle debugtrap(). This patch is to:
  1) make the front-end recognize "__builtin_debugtrap()" (embodied in the one-line change to Clang).
  2) In the DAG legalization phase, by default, "debugtrap" will be replaced with "trap", which
     makes __builtin_debugtrap() "available" to all existing ports without the hassle of
     changing their code.
  3) If a trap function is specified (via -trap-func=xyz to llc), both __builtin_debugtrap() and
     __builtin_trap() will be expanded into a call of the specified trap function.
     This behavior may need to change in the future.

  The provided test case is to make sure 2) and 3) are working for the ARM port, and we
already have a test case for x86.

llvm-svn: 166300
2012-10-19 20:11:16 +00:00
Michael Liao
9984a3bee3 Lower BUILD_VECTOR to SHUFFLE + INSERT_VECTOR_ELT for X86
- If INSERT_VECTOR_ELT is supported (above SSE2, either custom or via a
  sequence of legal insns), transform BUILD_VECTOR into SHUFFLE +
  INSERT_VECTOR_ELT if most of the elements can be built from SHUFFLE with few
  (so far 1) elements being inserted.

llvm-svn: 166288
2012-10-19 17:15:18 +00:00
Michael Liao
f5ea791113 Check SSSE3 instead of SSE4.1
- All shuffle insns required, especially PSHUFB, are added in SSSE3.

llvm-svn: 166086
2012-10-17 03:59:18 +00:00
Michael Liao
db8bc2e5dc Fix setjmp on code models other than Small and relocation models other than Static
- An MBB address is only valid as an immediate value in the Small & Static
  code/relocation models. In other models, an LEA is needed to load the IP address of
  the restore MBB.
- A minor fix of MBB in MC lowering is added as well to enable the target
  relocation flag to be propagated into MC.

llvm-svn: 166084
2012-10-17 02:22:27 +00:00
Michael Liao
a5c31648c0 Support v8f32 to v8i8/v8i16 conversion through custom lowering
- Add custom FP_TO_SINT on v8i16 (and v8i8, which is legalized as v8i16 due to
  vector element-wise widening) to reduce the DAG combine overhead added
  in the X86 backend.

llvm-svn: 166036
2012-10-16 18:14:11 +00:00
NAKAMURA Takumi
83458d4d01 Reapply r165661, Patch by Shuxin Yang <shuxin.llvm@gmail.com>.
Original message:

The attached is the fix to radar://11663049. The optimization can be outlined by the following rules:

   (select (x != c), e, c) -> (select (x != c), e, x),
   (select (x == c), c, e) -> (select (x == c), x, e)
where <c> is an integer constant.

 The reason for this change is that: on x86, a conditional move from a constant needs two instructions,
whereas a conditional move from a register needs only one.

  While LowerSELECT() sounds like the most convenient place for this optimization, it turns out to be a bad place. The reason is that by replacing the constant <c> with a symbolic value, it obscures some instruction-combining opportunities which would otherwise be very easy to spot. For that reason, I had to postpone the change to the last instruction-combining phase.

  The change passes the test of "make check-all -C <build-root>/test" and "make -C project/test-suite/SingleSource".

Original message since r165661:

My previous change has a bug: I negated the condition code of a CMOV, and went ahead creating a new CMOV using the *ORIGINAL* condition code.

llvm-svn: 166017
2012-10-16 06:28:34 +00:00
Michael Liao
a7e5913fde Add __builtin_setjmp/_longjmp support in the X86 backend
- Besides being used in SjLj exception handling, __builtin_setjmp/__longjmp is also
  used as a light-weight replacement for setjmp/longjmp, which are used to
  implement continuations, user-level threading, and so on. The support added
  in this patch ONLY addresses this usage and is NOT intended to support SjLj
  exception handling, as zero-cost DWARF exception handling is used by default
  in X86.

llvm-svn: 165989
2012-10-15 22:39:43 +00:00
Micah Villmow
272663afc2 Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
llvm-svn: 165941
2012-10-15 16:24:29 +00:00
Benjamin Kramer
35d6fcd24a X86: Fix accidentally swapped operands.
llvm-svn: 165871
2012-10-13 12:50:19 +00:00
Benjamin Kramer
a348e5aa65 X86: Promote i8 cmov when both operands are coming from truncates of the same width.
X86 doesn't have i8 cmovs so isel would emit a branch. Emitting branches at this
level is often not a good idea because it's too late for many optimizations to
kick in. This solution doesn't add any extensions (truncs are free) and tries
to avoid introducing partial register stalls by filtering direct copyfromregs.

I'm seeing a ~10% speedup on reading a random .png file with libpng15 via
graphicsmagick on x86_64/westmere, but YMMV depending on the microarchitecture.

llvm-svn: 165868
2012-10-13 10:39:49 +00:00
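A sketch of the shape being promoted (hypothetical test, not from the commit):

define i8 @select_i8(i32 %a, i32 %b, i1 %cond) {
  %ta = trunc i32 %a to i8
  %tb = trunc i32 %b to i8
  ; both operands come from truncates of the same width, so the i8 select
  ; can be promoted to a 32-bit cmov instead of being emitted as a branch
  %s = select i1 %cond, i8 %ta, i8 %tb
  ret i8 %s
}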
Micah Villmow
4eb108750d Revert 165732 for further review.
llvm-svn: 165747
2012-10-11 21:27:41 +00:00
Micah Villmow
d8b76fdc50 Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.
llvm-svn: 165726
2012-10-11 17:21:41 +00:00
NAKAMURA Takumi
1a341a8de1 Revert r165661, "Patch by Shuxin Yang <shuxin.llvm@gmail.com>."
It broke stage2 clang and test-suite/MultiSource/Benchmarks/mediabench/g721/g721encode.

llvm-svn: 165692
2012-10-11 02:02:05 +00:00
Evan Cheng
b84fe4f16a Change MachineInstrBuilder::addDisp to copy over target flags by default.
llvm-svn: 165677
2012-10-11 00:15:48 +00:00
Nadav Rotem
303a9b2e50 Patch by Shuxin Yang <shuxin.llvm@gmail.com>.
Original message:

The attached is the fix to radar://11663049. The optimization can be outlined by the following rules:

   (select (x != c), e, c) -> (select (x != c), e, x),
   (select (x == c), c, e) -> (select (x == c), x, e)
where <c> is an integer constant.

 The reason for this change is that: on x86, a conditional move from a constant needs two instructions,
whereas a conditional move from a register needs only one.

  While LowerSELECT() sounds like the most convenient place for this optimization, it turns out to be a bad place. The reason is that by replacing the constant <c> with a symbolic value, it obscures some instruction-combining opportunities which would otherwise be very easy to spot. For that reason, I had to postpone the change to the last instruction-combining phase.

  The change passes the test of "make check-all -C <build-root>/test" and "make -C project/test-suite/SingleSource".

llvm-svn: 165661
2012-10-10 21:31:55 +00:00
Michael Liao
6a09ff62ba Add support for FP_ROUND from v2f64 to v2f32
- Due to the current vector-element matching constraints in
  ISD::FP_ROUND, rounding from v2f64 to v4f32 (after legalization from
  v2f32) is scalarized. Add a customized v2f32 widening to convert it
  into a target-specific X86ISD::VFPROUND to work around these
  constraints.

llvm-svn: 165631
2012-10-10 16:53:28 +00:00
Michael Liao
c434bfd7e4 Add alternative support for FP_EXTEND from v2f32 to v2f64
- Due to the current vector-element matching constraints in ISD::FP_EXTEND,
  extending from v2f32 to v2f64 is scalarized. Add a customized v2f32 widening
  to convert it into a target-specific X86ISD::VFPEXT to work around these
  constraints. This patch also reverts a previous attempt to fix this issue by
  recovering the scalarized ISD::FP_EXTEND pattern and thus significantly
  reduces the overhead of supporting non-power-of-2 vector FP extends.

llvm-svn: 165625
2012-10-10 16:32:15 +00:00
Evan Cheng
7aa64222c5 When expanding atomic load arith instructions, do not lose target flags. rdar://12453106
llvm-svn: 165568
2012-10-09 23:48:33 +00:00
Bill Wendling
b53357de39 Create enums for the different attributes.
We use the enums to query whether an Attributes object has that attribute. The
opaque layer is responsible for knowing where that specific attribute is stored.

llvm-svn: 165488
2012-10-09 07:45:08 +00:00
Micah Villmow
bb1a25cd67 Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
Preston Gurd
0256511b5d This patch corrects commit 165126 by using an integer bit width instead of
a pointer to a type, in order to remove the uses of getGlobalContext().

Patch by Tyler Nowicki.

llvm-svn: 165255
2012-10-04 21:33:40 +00:00
Michael Liao
8d00f5b6e3 Add register encoding support in the X86 backend
- Add 'HwEncoding' for X86 registers and call getEncodingValue() to
  retrieve their encoding values.
- This is the first step in adopting the new scheme. Further revising is ongoing.

llvm-svn: 165241
2012-10-04 19:50:43 +00:00
Bill Wendling
3535cd36c8 Use new accessor methods to query for attributes.
llvm-svn: 165205
2012-10-04 06:43:21 +00:00
Michael Liao
2a6152886a Clean up trailing whitespace
llvm-svn: 165182
2012-10-03 23:43:52 +00:00
Craig Topper
a970c6ab45 Change getX86SubSuperRegister to take an MVT::SimpleValueType rather than an EVT and add llvm_unreachable to the switches. Helps it compile to dramatically better code.
llvm-svn: 164919
2012-09-30 19:49:56 +00:00
Bill Wendling
92f3ab845d Remove the `hasFnAttr' method from Function.
The hasFnAttr method has been replaced by querying the Attributes explicitly. No
intended functionality change.

llvm-svn: 164725
2012-09-26 21:48:26 +00:00
Michael Liao
b50e89ddce Add missing i64 max/min/umax/umin on 32-bit targets
- Turn on atomic6432.ll and add a specific test case as well

llvm-svn: 164616
2012-09-25 18:08:13 +00:00
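A minimal sketch of an op this enables (hypothetical test, written in the typed-pointer IR syntax of the time):

define i64 @max64(i64* %p, i64 %v) {
  %old = atomicrmw max i64* %p, i64 %v seq_cst   ; expands to a cmpxchg8b spin-loop on 32-bit x86
  ret i64 %old
}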
Evan Cheng
a5b2ee52c9 Fix an illegal tailcall opt where the callee returns a double via xmm while the caller returns x86_fp80 via st0. rdar://12229511
llvm-svn: 164588
2012-09-25 05:32:34 +00:00
Michael Liao
2197b133f8 Add missing i8 max/min/umax/umin support
- Fix PR5145 and turn on tests for 8-bit atomic ops

llvm-svn: 164358
2012-09-21 03:18:52 +00:00
Michael Liao
439a9cea68 Revise td of X86 atomic instructions
- Rewrite most atomic instructions in templates for both better
  maintenance and future extensions, such as HLE in TSX.

llvm-svn: 164357
2012-09-21 03:00:17 +00:00
Michael Liao
34658dca78 Re-work X86 code generation of atomic ops with spin-loop
- Rewrite/merge pseudo-atomic instruction emitters to address the
  following issue:
  * Remove one unnecessary load in the spin-loop

    previously the spin-loop looked like:

        thisMBB:
        newMBB:
          ld  t1 = [bitinstr.addr]
          op  t2 = t1, [bitinstr.val]
          not t3 = t2  (if Invert)
          mov EAX = t1
          lcs dest = [bitinstr.addr], t3  [EAX is implicit]
          bz  newMBB
          fallthrough -->nextMBB

    the 'ld' at the beginning of newMBB should be lifted out of the loop
    as lcs (or CMPXCHG on x86) will load the current memory value into
    EAX. This loop is refined as:

        thisMBB:
          EAX = LOAD [MI.addr]
        mainMBB:
          t1 = OP [MI.val], EAX
          LCMPXCHG [MI.addr], t1, [EAX is implicitly used & defined]
          JNE mainMBB
        sinkMBB:

  * Remove immopc as, so far, all pseudo-atomic instructions have
    all-register forms only; there is no immediate operand.

  * Remove unnecessary attributes/modifiers in pseudo-atomic instruction
    td

  * Fix issues in PR13458

- Add comprehensive tests on atomic ops on various data types.
  NOTE: Some of them are turned off due to missing functionality.

- Revise tests due to the new spin-loop generated.

llvm-svn: 164281
2012-09-20 03:06:15 +00:00
Benjamin Kramer
3be8d89f89 X86: Emitting x87 fsin/fcos for sinf/cosf is not safe without unsafe fp math.
This was only an issue if SSE is disabled.

llvm-svn: 163967
2012-09-15 12:44:27 +00:00
Michael Liao
5eea004951 Fix comment
llvm-svn: 163835
2012-09-13 20:30:16 +00:00
Michael Liao
0c0da113c5 Add wider vector/integer support for PR12312
- Enhance the fix to PR12312 to support wider integers, such as 256-bit
  integers. If more than one fully evaluated vector is found, POR them
  first, followed by the final PTEST.

llvm-svn: 163832
2012-09-13 20:24:54 +00:00
Michael Liao
e600a8a616 Fix PR11985
- BlockAddress has no support for the BA + offset form and there is no way to
  propagate that offset into a machine operand;
- Add BA + offset support and a new interface 'getTargetBlockAddress' to
  simplify target block address forming;
- All targets are modified to use new interface and X86 backend is enhanced to
  support BA + offset addressing.

llvm-svn: 163743
2012-09-12 21:43:09 +00:00
Craig Topper
24d6cafc79 Indentation fixes. No functional change.
llvm-svn: 163682
2012-09-12 06:20:41 +00:00
Craig Topper
557b8a5a81 Make a bunch of lowering helper functions static instead of member functions. No functional change.
llvm-svn: 163596
2012-09-11 06:15:32 +00:00
Dmitri Gribenko
1d75adbbb2 Remove redundant semicolons which are null statements.
llvm-svn: 163547
2012-09-10 21:26:47 +00:00
Michael Liao
7dfa5e2092 Enhance PR11334 fix to support extload from v2f32/v4f32
- Fix a remaining issue of PR11674 as well

llvm-svn: 163528
2012-09-10 18:33:51 +00:00
Michael Liao
2791a08d7e Add boolean simplification support from CMOV
- If a boolean value is generated from CMOV and tested as a boolean value,
  simplify the use of the test result by referencing the original condition.
  The RDRAND intrinsic is one such case.

llvm-svn: 163516
2012-09-10 16:36:16 +00:00
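A sketch of the simplified shape (hypothetical test, not from the commit); the branch can use the original condition instead of re-testing the materialized boolean:

define i32 @bool_from_cmov(i32 %a, i32 %b) {
entry:
  %cond = icmp ult i32 %a, %b
  %sel  = select i1 %cond, i32 1, i32 0   ; materialized as a cmov
  %test = icmp ne i32 %sel, 0             ; redundant re-test of the boolean
  br i1 %test, label %yes, label %no
yes:
  ret i32 1
no:
  ret i32 0
}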
Elena Demikhovsky
56cdc6a59a The VPSHUFB 256-bit instruction may be generated when one of the input vectors is undefined or zeroinitializer.
I've added the "zeroinitializer" case in this patch.

llvm-svn: 163506
2012-09-10 12:13:11 +00:00
Craig Topper
a91d731898 Add instruction selection for ffloor of vectors when SSE4.1 or AVX is enabled.
llvm-svn: 163473
2012-09-08 17:42:27 +00:00
Craig Topper
9bda7e421e Use 256-bit alignment for constant pool value for 256-bit vector FNEG lowering.
llvm-svn: 163463
2012-09-08 07:46:05 +00:00
Craig Topper
53ec08b4fc Add support for lowering FABS of vector types.
llvm-svn: 163461
2012-09-08 07:31:51 +00:00
Craig Topper
eb1db45675 Set operation action for FFLOOR to Expand for all vector types for X86. Set FFLOOR of v4f32 to Expand for ARM. v2f64 was already correct.
llvm-svn: 163458
2012-09-08 04:58:43 +00:00
Elena Demikhovsky
9339eef307 AVX2 optimization.
Added generation of the VPSHUFB instruction for <32 x i8> vector shuffles when possible.

llvm-svn: 163312
2012-09-06 12:42:01 +00:00
Michael Liao
290d2703fe Remove duplicated helper function
llvm-svn: 163295
2012-09-06 07:11:22 +00:00
Craig Topper
0b9e2dd7a7 Use iPTR instead of i32 for extract_subvector/insert_subvector index in lowering and patterns. This makes it consistent with the incoming DAG nodes from the DAG builder.
llvm-svn: 163293
2012-09-06 06:09:01 +00:00
Roman Divacky
85348270cd Stop casting away const qualifier needlessly.
llvm-svn: 163258
2012-09-05 22:26:57 +00:00
Preston Gurd
c80dc7d214 Generic Bypass Slow Div
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom with O2 32-bit -> 8-bit
- Replace IDIV with instructions which test its value and use DIVB if the value
is positive and less than 256.
- In the case when the quotient and remainder of a divide are both used, a DIV
and a REM instruction will be present in the IR. In the non-Atom case
they are both lowered to IDIVs and CSE removes the redundant IDIV instruction,
using the quotient and remainder from the first IDIV. However,
due to this optimization CSE is not able to eliminate redundant
IDIV instructions because they are located in different basic blocks.
This is overcome by calculating both the quotient (DIV) and remainder (REM)
in each basic block that is inserted by the optimization and reusing the result
values when a subsequent DIV or REM instruction uses the same operands.
- Test cases check for the presence of the optimization when calculating
either the quotient, the remainder, or both.

Patch by Tyler Nowicki!

llvm-svn: 163150
2012-09-04 18:22:17 +00:00
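As a sketch, a division the pass guards (hypothetical function; on Atom at O2 a runtime check picks the 8-bit divide when both operands fit in 8 bits):

define i32 @div_u32(i32 %a, i32 %b) {
  %q = udiv i32 %a, %b   ; bypassed to DIVB behind a runtime range check
  ret i32 %q
}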
Elena Demikhovsky
61924c155d This patch optimizes a shuffle instruction - it generates 2 instructions instead of 4.
Since this specific shuffle is widely used in many workloads, we see a ~10% performance improvement on them.

shufflevector <8 x float> %A, <8 x float> %B, <8 x i32> <i32 0, i32 8, i32 2, i32 10, i32 4, i32 12, i32 6, i32 14>

Before:
vmovaps (%rdx), %ymm0
vshufps $8, %ymm0, %ymm0, %ymm0
vmovaps (%rcx), %ymm1
vshufps $8, %ymm0, %ymm1, %ymm1
vunpcklps       %ymm0, %ymm1, %ymm0

After:
vmovaps (%rcx), %ymm0
vmovsldup       (%rdx), %ymm1
vblendps        $85, %ymm0, %ymm1, %ymm0

llvm-svn: 163134
2012-09-04 12:49:02 +00:00
Craig Topper
0791e3f380 Typos
llvm-svn: 163053
2012-09-01 06:33:50 +00:00
Manman Ren
9afdad8207 SelectionDAG: when constructing VZEXT_LOAD from other loads, make sure its
output chain is correctly set up.

As an example, if the original load must happen before later stores, we need
to make sure the constructed VZEXT_LOAD is constrained to be before the stores.

rdar://11457792

llvm-svn: 163036
2012-08-31 23:16:57 +00:00
Michael Liao
6f4b3f358d Fix PR12359
- In addition to undefined, if V2 is a zero vector, skip the 2nd PSHUFB and POR as
  well, since PSHUFB will zero elements with negative indices.

  Patch by Sriram Murali <sriram.murali@intel.com>

llvm-svn: 163018
2012-08-31 20:12:31 +00:00