mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 20:43:44 +02:00
Commit Graph

6965 Commits

Author SHA1 Message Date
Jim Grosbach
4d945565f7 ARM: FMA is legal only if VFP4 is available.
rdar://13306723

llvm-svn: 176212
2013-02-27 21:31:12 +00:00
Manman Ren
894d0f9fc3 SelectionDAG: If llvm.donothing has a landingpad, we should clear
CurrentCallSite to avoid an assertion failure:
assert(MMI.getCurrentCallSite() == 0 && "Overlapping call sites!");

rdar://problem/13228754

llvm-svn: 176154
2013-02-27 02:11:57 +00:00
Bill Schmidt
5440b8eaca Fix PR15332 (patch by Florian Zeitz).
There's no need to generate a stack frame for PPC32 SVR4 when there are
no local variables assigned to the stack, i.e., when no red zone is needed.
(PPC64 supports a red zone, but PPC32 does not.)

llvm-svn: 176124
2013-02-26 21:28:57 +00:00
Chad Rosier
0370d957b0 Add a test case for r176066.
llvm-svn: 176119
2013-02-26 20:22:30 +00:00
Chad Rosier
3c39a1292b Remove a few unused arguments.
llvm-svn: 176109
2013-02-26 18:39:31 +00:00
Bill Schmidt
76befd83d4 Fix PR15359.
The PowerPC TLS relocation types were not previously added to the
necessary list in MCELFStreamer::fixSymbolsInTLSFixups().  Now they are!

llvm-svn: 176094
2013-02-26 16:41:03 +00:00
Kostya Serebryany
f560b78692 Unify clang/llvm attributes for asan/tsan/msan (LLVM part)
These are two related changes (one in llvm, one in clang).
LLVM: 
- rename address_safety => sanitize_address (the enum value is the same, so we preserve binary compatibility with old bitcode)
- rename thread_safety => sanitize_thread
- rename no_uninitialized_checks => sanitize_memory

CLANG: 
- add __attribute__((no_sanitize_address)) as a synonym for __attribute__((no_address_safety_analysis))
- add __attribute__((no_sanitize_thread))
- add __attribute__((no_sanitize_memory))

For each S in {address, thread, memory}: if -fsanitize=S is present and
__attribute__((no_sanitize_S)) is not, set the LLVM attribute sanitize_S.
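
A minimal sketch of what the new spelling looks like at the source level,
assuming clang is invoked with -fsanitize=address (the function name is made
up for illustration):

__attribute__((no_sanitize_address))   // new synonym for no_address_safety_analysis
void legacy_checksum(char *buf, int len) {
  // clang will not attach the sanitize_address LLVM attribute to this
  // function, so ASan instrumentation is skipped here while the rest of
  // the translation unit is still instrumented.
  for (int i = 0; i < len; ++i)
    buf[0] ^= buf[i];
}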

llvm-svn: 176075
2013-02-26 06:58:09 +00:00
Michael Liao
ff7d7ec88b Fix PR10499
- Check whether SSE is available before lowering an all-ones vector build with
  PCMPEQD, which is only available from SSE2 onward

llvm-svn: 176058
2013-02-25 23:01:03 +00:00
Chad Rosier
f38b2c410b Remove extraneous attribute number.
llvm-svn: 176053
2013-02-25 22:06:05 +00:00
Chad Rosier
37142b6930 [fast-isel] Add X86FastIsel::FastLowerArguments to handle functions with 6 or
fewer scalar integer (i32 or i64) arguments. It completely eliminates the need
for SDISel for trivial functions.

Also, add the new llc -fast-isel-abort-args option, which is similar to
-fast-isel-abort option, but for formal argument lowering.

llvm-svn: 176052
2013-02-25 21:59:35 +00:00
Andrew Trick
9dd0c20307 pre-RA-sched fix: only reevaluate physreg interferences when necessary.
Fixes rdar://13279013: the scheduler was blowing up on select instructions.

llvm-svn: 176037
2013-02-25 19:11:48 +00:00
Bill Schmidt
a7e4a58051 Fix missing relocation for TLS addressing peephole optimization.
Report and fix due to Kai Nacke.  Testcase update by me.

llvm-svn: 176029
2013-02-25 16:44:35 +00:00
Chandler Carruth
aea541125e Fix the root cause of PR15348 by correctly handling alignment 0 on
memory intrinsics in the SDAG builder.

When alignment is zero, the lang ref says that *no* alignment
assumptions can be made. This is the exact opposite of the internal API
contracts of the DAG where alignment 0 indicates that the alignment can
be made to be anything desired.

There is another, more explicit alignment that is better suited for the
role of "no alignment at all": an alignment of 1. Map the intrinsic
alignment to this early so that we don't end up generating aligned DAGs.

It is really terrifying that we've never seen this before, but we
suddenly started generating a large number of alignment 0 memcpys due to
the new code to do memcpy-based copying of POD class members. That patch
contains a bug that rounds bitfield alignments down when they are the
first field. This can in turn produce zero alignments.

This fixes weird crashes I've seen in library users of LLVM on 32-bit
hosts, etc.
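
A minimal sketch of the mapping described above (the helper name is
hypothetical; the real change lives in the SelectionDAG builder's handling of
the memory intrinsics):

// IR alignment 0 means "no assumptions may be made"; in the DAG, 0 means
// "choose any alignment you like".  Map 0 to the weakest explicit value, 1.
static unsigned clampMemIntrinsicAlign(unsigned AlignFromIR) {
  return AlignFromIR == 0 ? 1 : AlignFromIR;
}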

llvm-svn: 176022
2013-02-25 14:20:21 +00:00
Nadav Rotem
0740239f87 Revert r169638 because it broke Mesa llvmpipe tests.
Fix PR15239.

llvm-svn: 175985
2013-02-24 07:09:35 +00:00
Benjamin Kramer
bdb1d9aad3 X86: Disable cmov-memory patterns on subtargets without cmov.
Fixes PR15115.

llvm-svn: 175962
2013-02-23 10:40:58 +00:00
Reed Kotler
65cb21ddd8 Expand pseudos/macros for Selt. This is the last of the complex
macros. The rest is some small misc. stuff.

llvm-svn: 175950
2013-02-23 03:09:56 +00:00
Akira Hatanaka
8f0f207217 [mips] Emit call16 operator instead of got_disp. The former allows lazy binding.
llvm-svn: 175920
2013-02-22 21:10:03 +00:00
Peter Collingbourne
276de50188 Fix test by matching movaps instead of AVX-only vmovaps
llvm-svn: 175914
2013-02-22 19:53:30 +00:00
Peter Collingbourne
7dc1ee08f5 x86_64: designate most general purpose and SSE registers as callee save under coldcc
llvm-svn: 175911
2013-02-22 19:19:44 +00:00
Pete Cooper
b4726c928e Remove unused CHECK lines copied from another test
llvm-svn: 175905
2013-02-22 18:16:21 +00:00
Kristof Beyls
a686678676 Make ARMAsmPrinter generate the correct alignment specifier syntax in instructions.
The Printer will now print instructions with the correct alignment specifier syntax, like
    vld1.8  {d16}, [r0:64]

llvm-svn: 175884
2013-02-22 10:01:33 +00:00
Reed Kotler
340c9d39ce Expand mips16 SelT form pseudos/macros.
llvm-svn: 175862
2013-02-22 05:10:51 +00:00
Pete Cooper
6da577a986 Fix isa<> check which could never be true.
It was incorrectly checking whether a Function* was an IntrinsicInst*, which
can never be true.  It should always have been checking the CallInst* instead.

Added test case for x86 which ensures we only get one constant load.
It was 2 before this change.

rdar://problem/13267920
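
A hedged sketch of the bug class, assuming LLVM's casting utilities
(llvm/Support/Casting.h) and a CallInst *Call in scope:

// IntrinsicInst derives from CallInst, not from Function, so the test has
// to be made on the call instruction itself.
const IntrinsicInst *II = dyn_cast<IntrinsicInst>(Call);   // this can succeed
// versus dyn_cast<IntrinsicInst>(Call->getCalledFunction()), which never can.
if (II) {
  // handle the intrinsic call
}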

llvm-svn: 175853
2013-02-22 01:50:38 +00:00
Anshuman Dasgupta
810cccb843 Hexagon: Expand cttz, ctlz, and ctpop for now.
llvm-svn: 175783
2013-02-21 19:39:40 +00:00
Jakob Stoklund Olesen
38b12c2ce2 Make RAFast::UsedInInstr indexed by register units.
This fixes some problems with overly conservative checking, where we were
marking all aliases of a register as used and then also checking all
aliases when allocating a register.

<rdar://problem/13249625>
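
A short sketch of per-register-unit marking, assuming LLVM's
MCRegUnitIterator; the container name UsedInInstr is taken from the commit,
everything else is illustrative:

// Each physical register is covered by one or more register units, and two
// registers alias exactly when they share a unit, so marking units gives
// alias-correct bookkeeping without walking every alias explicitly.
for (MCRegUnitIterator Units(PhysReg, TRI); Units.isValid(); ++Units)
  UsedInInstr.insert(*Units);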

llvm-svn: 175782
2013-02-21 19:35:21 +00:00
Bill Schmidt
049ba390f5 Large code model support for PowerPC.
Large code model is identical to medium code model except that the
addis/addi sequence for "local" accesses is never used.  All accesses
use the addis/ld sequence.

The coding changes are straightforward; most of the patch is taken up
with creating variants of the medium model tests for large model.

llvm-svn: 175767
2013-02-21 17:12:27 +00:00
Benjamin Kramer
9de866701b DAGCombiner: Make the post-legalize vector op optimization more aggressive.
A legal BUILD_VECTOR goes in and gets constant folded into another legal
BUILD_VECTOR so we don't lose any legality here. The problematic PPC
optimization that made this check necessary was fixed recently.

llvm-svn: 175759
2013-02-21 15:24:35 +00:00
Tom Stellard
aa63f0e8d4 R600: Fix for Unigine when MachineSched is enabled
Fixes for-loop.cl piglit test

Patch By: Vincent Lejeune

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

NOTE: This is a candidate for the Mesa stable branch.
llvm-svn: 175742
2013-02-21 15:06:59 +00:00
Michel Danzer
756af8b106 R600/SI: Make sure M0 is loaded for V_INTERP_MOV_F32
NOTE: This is a candidate for the Mesa stable branch.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 175733
2013-02-21 08:57:10 +00:00
Reed Kotler
276bb6b70b Expand the sel pseudo/macro. This generates basic blocks where previously
there were inline br .+4 instructions. Soon everything can enjoy the
full instruction scheduling experience.

llvm-svn: 175718
2013-02-21 04:22:38 +00:00
Bill Schmidt
0e7935e723 PPCDAGToDAGISel::PostprocessISelDAG()
This patch implements the PPCDAGToDAGISel::PostprocessISelDAG virtual
method to perform post-selection peephole optimizations on the DAG
representation.

One optimization is implemented here:  folds to clean up complex
addressing expressions for thread-local storage and medium code
model.  It will also be useful for large code model sequences when
those are added later.  I originally thought about doing this on the
MI representation prior to register assignment, but it's difficult to
do effective global dead code elimination at that point.  DCE is
trivial on the DAG representation.

A typical example of a candidate code sequence in assembly:

   addis 3, 2, globalvar@toc@ha
   addi  3, 3, globalvar@toc@l
   lwz   5, 0(3)

When the final instruction is a load or store with an immediate offset
of zero, the offset from the add-immediate can replace the zero,
provided the relocation information is carried along:

   addis 3, 2, globalvar@toc@ha
   lwz   5, globalvar@toc@l(3)

Since the addi can in general have multiple uses, we need to only
delete the instruction when the last use is removed.

llvm-svn: 175697
2013-02-21 00:38:25 +00:00
Bill Schmidt
9e8b42e2f9 Stabilize vec_constants.ll
llvm-svn: 175683
2013-02-20 22:43:03 +00:00
Arnold Schwaighofer
170d2a8c25 DAGCombiner: Fold pointless truncate, bitcast, buildvector series
(2xi32) (truncate ((2xi64) bitcast (buildvector i32 a, i32 x, i32 b, i32 y)))
can be folded into a (2xi32) (buildvector i32 a, i32 b).

Such a DAG would cause unnecessary vdup instructions followed by vmovn
instructions.

We generate this code on ARM NEON for a setcc olt, 2xf64, 2xf64. For example, in
the vectorized version of the code below.

double A[N];
double B[N];

void test_double_compare_to_double() {
  int i;
  for(i=0;i<N;i++)
    A[i] = (double)(A[i] < B[i]);
}

radar://13191881

Fixes bug 15283.

llvm-svn: 175670
2013-02-20 21:33:32 +00:00
Bill Schmidt
bcb4fa48fa Additional fixes for bug 15155.
This handles the cases where the 6-bit splat element is odd, converting
to a three-instruction sequence to add or subtract two splats.  With this
fix, the XFAIL in test/CodeGen/PowerPC/vec_constants.ll is removed.

llvm-svn: 175663
2013-02-20 20:41:42 +00:00
Michael Liao
a500005adc Fix PR15267
- When extloading from a vector with a non-byte-addressable element type, e.g.
  <4 x i1>, the current logic breaks. Extend it to handle the case where the
  element type is not byte-addressable by loading all the bytes and then
  bit-extracting/packing each element.

llvm-svn: 175642
2013-02-20 18:04:21 +00:00
Bill Schmidt
358367c60f Fix bug 14779 for passing anonymous aggregates [patch by Kai Nacke].
The PPC backend doesn't handle these correctly.  This patch uses logic
similar to that in the X86 and ARM backends to track these arguments
properly.

llvm-svn: 175635
2013-02-20 17:31:41 +00:00
Jyotsna Verma
84136133e3 Hexagon: Move HexagonMCInst.h to MCTargetDesc/HexagonMCInst.h.
Add HexagonMCInst class which adds various Hexagon VLIW annotations.
In addition, this class also includes some APIs related to the
constant extenders.

llvm-svn: 175634
2013-02-20 16:13:27 +00:00
Bill Schmidt
93b2fc9f50 Fix PR15155: lost vadd/vsplat optimization.
During lowering of a BUILD_VECTOR, we look for opportunities to use a
vector splat.  When the splatted value fits in 5 signed bits, a single
splat does the job.  When it doesn't fit in 5 bits but does fit in 6,
and is an even value, we can splat on half the value and add the result
to itself.

This last optimization hasn't been working recently because of improved
constant folding.  To circumvent this, create a pseudo VADD_SPLAT that
can be expanded during instruction selection.
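
A rough sketch of the strategy choice described above (the helper is
hypothetical, not the actual PPC lowering code):

// Pick how to materialize a vector whose lanes all equal v.
static const char *chooseSplatStrategy(int v) {
  if (v >= -16 && v <= 15)
    return "single splat-immediate";                    // fits in 5 signed bits
  if (v >= -32 && v <= 31 && v % 2 == 0)
    return "splat v/2, then add the result to itself";  // even 6-bit value
  return "some other lowering";
}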

llvm-svn: 175632
2013-02-20 15:50:31 +00:00
Elena Demikhovsky
0886fb4d55 I optimized the following patterns:
sext <4 x i1> to <4 x i64>
sext <4 x i8> to <4 x i64>
sext <4 x i16> to <4 x i64>

I run a Combine on SIGN_EXTEND_IN_REG and revert the SEXT patterns:
 (sext_in_reg (v4i64 anyext (v4i32 x)), ExtraVT) -> (v4i64 sext (v4i32 sext_in_reg (v4i32 x, ExtraVT)))

The sext_in_reg (v4i32 x) may be lowered to shl+sar operations.
There is no "sar" for 64-bit elements, so lowering sext_in_reg (v4i64 x) has no vector solution.

I also added the cost of these operations to the AVX cost table.

llvm-svn: 175619
2013-02-20 12:42:54 +00:00
Logan Chien
740a4514e2 Fix thumbv5e frame lowering assertion failure.
It is possible that the frame pointer is not found in the
callee-saved info; thus FramePtrSpillFI may be incorrect
if we don't check the result of hasFP(MF).

Besides, if we enable the stack coloring algorithm, there
will be an assertion to ensure the slot is live.  In
the test case, %var1 is not live in the prologue of the
function, so we hit that assertion failure.

Note: There is similar code in ARMFrameLowering.cpp.
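
A self-contained sketch of the guard (types and names are illustrative, not
the actual ARM frame-lowering code):

struct SpillEntry { unsigned Reg; int FrameIdx; };

// Only record the frame pointer's spill slot when the function actually
// has a frame pointer; otherwise FramePtrSpillFI must stay unset.
static int findFramePtrSpillFI(bool HasFP, unsigned FramePtrReg,
                               const SpillEntry *CSI, int NumEntries) {
  if (!HasFP)
    return -1;
  for (int i = 0; i < NumEntries; ++i)
    if (CSI[i].Reg == FramePtrReg)
      return CSI[i].FrameIdx;
  return -1;
}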
llvm-svn: 175616
2013-02-20 12:21:33 +00:00
Reed Kotler
030e941124 Expand pseudos/macros:
SltCCRxRy16, SltiCCRxImmX16, SltiuCCRxImmX16, SltuCCRxRy16
$T8 shows up as register $24 when emitted from C++ code, so we had
to change some existing tests for this functionality.

llvm-svn: 175593
2013-02-20 05:45:15 +00:00
Chad Rosier
2be41be7b9 [ms-inline asm] Force the use of a base pointer if the MachineFunction includes
MS-style inline assembly.

This is a follow-on to r175334.  Forcing an FP to be emitted doesn't ensure it
will be used.  Therefore, force the base pointer as well.  We now treat MS
inline assembly in the same way we treat functions with dynamic stack
realignment and VLAs.  This guarantees the BP will be used to reference 
parameters and locals.
rdar://13218191
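
A minimal sketch of the condition, with hypothetical parameter names standing
in for the real frame-info queries:

// MS inline assembly is now treated exactly like dynamic stack realignment
// and VLAs: any of the three forces the use of a base pointer.
static bool needsBasePointer(bool DynamicRealign, bool HasVLAs,
                             bool HasMSInlineAsm) {
  return DynamicRealign || HasVLAs || HasMSInlineAsm;
}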

llvm-svn: 175576
2013-02-19 23:50:45 +00:00
Jim Grosbach
0d47c3335f ARM: Allocation hints must make sure to be in the alloc order.
When creating an allocation hint for a register pair, make sure the hint
for the physical register reference is still in the allocation order.

rdar://13240556

llvm-svn: 175541
2013-02-19 18:55:36 +00:00
Eli Bendersky
1523eabc7e Fix typo
llvm-svn: 175530
2013-02-19 17:11:48 +00:00
Benjamin Kramer
d0bfa4e8dc Fix GCMetadataPrinter::finishAssembly not being executed, patch by Yiannis Tsiouris.
Due to the execution order of doFinalization functions, the GC information was
deleted before AsmPrinter::doFinalization was executed. Thus,
GCMetadataPrinter::finishAssembly was never called.

The patch fixes that by moving the code of the GCInfoDeleter::doFinalization to
Printer::doFinalization.

llvm-svn: 175528
2013-02-19 16:51:44 +00:00
Arnold Schwaighofer
3a1cb40149 ARM NEON: Merge a f32 bitcast of a v2i32 extractelt
A vectorized sitofp on doubles will get scalarized to a sequence of an
extract_element of <2 x i32>, a bitcast to f32 and a sitofp.
Due to the extract_element and the bitcast, we will unnecessarily generate
moves between scalar and vector registers.

The patch fixes this by using a COPY_TO_REGCLASS and an EXTRACT_SUBREG to extract
the element from the vector instead.

radar://13191881

llvm-svn: 175520
2013-02-19 15:27:05 +00:00
Reed Kotler
d849980705 Expand pseudos/macros BteqzT8SltiX16, BteqzT8SltiuX16,
BtnezT8SltiX16, BtnezT8SltiuX16 .

llvm-svn: 175486
2013-02-19 03:56:57 +00:00
Reed Kotler
7ddfd1de27 Expand pseudos BteqzT8CmpiX16 and BtnezT8CmpiX16.
llvm-svn: 175474
2013-02-19 00:20:58 +00:00
Chad Rosier
5babcb4a4b Comment out the rdar number.
llvm-svn: 175460
2013-02-18 21:59:15 +00:00
Chad Rosier
81ced58e28 [fast-isel] Remove an invalid assert.
If the memcpy has an odd length with an alignment of 2, this would incorrectly
assert on the last 1-byte copy.
rdar://13202135

llvm-svn: 175459
2013-02-18 21:46:28 +00:00