mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-27 05:53:07 +01:00
Commit Graph

36772 Commits

Author SHA1 Message Date
Reid Kleckner
607aa67b5e [codeview] Use comdats for debug info describing comdat functions
Summary:
This allows the linker to discard unused symbol information for comdat
functions that were discarded during the link. Before this change,
searching for the name of an inline function in the debugger would
return multiple results, one per symbol subsection in the object file.
After this change, there is only one result, the result for the function
chosen by the linker.
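
A minimal illustration of the scenario (hypothetical file and function names, not
from the commit):

```
// shapes.h: an inline function becomes a comdat function in every object that
// uses it, and each of those objects also carries a symbol subsection
// describing it for the debugger.
inline int area(int w, int h) { return w * h; }

// a.cpp and b.cpp both include shapes.h and call area(). The linker keeps only
// one copy of area(); with this change the debug-symbol subsection for the
// discarded copies lives in the same comdat, so it is discarded together with
// them.
```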

Reviewers: zturner, majnemer

Subscribers: aaboud, amccarth, llvm-commits

Differential Revision: http://reviews.llvm.org/D20642

llvm-svn: 270792
2016-05-25 23:16:12 +00:00
Manman Ren
0877622c14 Objective-C Class Properties: Autoupgrade "Class Properties" module flag.
When we have "Image Info Version" module flag but don't have "Class Properties"
module flag, set "Class Properties" module flag to 0, so we can correctly emit
errors when one module has the flag set and another module does not.
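
A sketch of the upgrade described above (assuming LLVM's module-flag API and an
Error merge behavior; the actual upgrade code may differ):

```
#include "llvm/IR/Module.h"
using namespace llvm;

// If the module carries Objective-C image info but predates the "Class
// Properties" flag, record the flag as 0 so mismatches are diagnosed.
void upgradeClassProperties(Module &M) {
  if (M.getModuleFlag("Image Info Version") && !M.getModuleFlag("Class Properties"))
    M.addModuleFlag(Module::Error, "Class Properties", 0);
}
```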

rdar://26469641

llvm-svn: 270791
2016-05-25 23:14:48 +00:00
Petr Hosek
695c00d618 [MC] Support symbolic expressions in assembly directives
This matches the behavior of the GNU assembler, which supports symbolic
expressions in absolute expressions used in assembly directives.

Differential Revision: http://reviews.llvm.org/D20337

llvm-svn: 270786
2016-05-25 22:47:51 +00:00
Michael Kuperstein
f4506f07fc [BasicAA] Improve precision of alloca vs. inbounds GEP alias queries
If we have (a) a GEP and (b) a pointer based on an alloca, and the
beginning of the object the GEP points into would have a negative offset with
respect to the alloca, then the GEP cannot alias pointer (b).

For example, consider code like:

struct foo { int f0; int f1; /* ... */ };
...
foo alloca;
foo *random = bar(&alloca);
int *f0 = &alloca.f0;
int *f1 = &random->f1;

Which is lowered, approximately, to:
%alloca = alloca %struct.foo
%random = call %struct.foo* @random(%struct.foo* %alloca)
%f0 = getelementptr inbounds %struct, %struct.foo* %alloca, i32 0, i32 0
%f1 = getelementptr inbounds %struct, %struct.foo* %random, i32 0, i32 1

Assume %f1 and %f0 alias. Then %f1 would point into the object allocated
by %alloca. Since the %f1 GEP is inbounds, that means %random must also
point into the same object. But since %f0 points to the beginning of %alloca,
the highest %f1 can be is (%alloca + 3). This means %random can not be higher
than (%alloca - 1), and so is not inbounds, a contradiction.

Differential Revision: http://reviews.llvm.org/D20495

llvm-svn: 270777
2016-05-25 22:23:08 +00:00
Adrian Prantl
867f677e9a PR26055: Speed up LiveDebugValues by replacing lists with bitvectors.
This patch modifies the LiveDebugValues pass to use more efficient set
data structures as outlined in PR26055. Both VarLocSet and VarLocList are
now SparseBitVectors which allows us to perform much faster bitvector
arithmetic on them.

The speedup can be on the order of minutes, especially on ASANified code.

The change is not NFC in the assembler output because the inserted
DBG_VALUEs are now sorted by variable and location.

Many thanks to Daniel Berlin for helping design the improved algorithm and
reviewing the patch.
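
For illustration, a minimal sketch of the kind of set operations SparseBitVector
makes cheap (not the pass code itself):

```
#include "llvm/ADT/SparseBitVector.h"
using namespace llvm;

void joinExample() {
  SparseBitVector<> InLocs, PredOutLocs;
  PredOutLocs.set(5);         // location index 5 is live out of a predecessor
  PredOutLocs.set(1000);      // sparse storage keeps large indices cheap
  InLocs |= PredOutLocs;      // fast union at the dataflow join
  bool Live = InLocs.test(5); // fast membership query
  (void)Live;
}
```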

https://llvm.org/bugs/show_bug.cgi?id=26055
http://reviews.llvm.org/D20178
rdar://problem/24091200

llvm-svn: 270776
2016-05-25 22:21:12 +00:00
Hal Finkel
e4580bceba Look for a loop's starting location in the llvm.loop metadata
Getting accurate locations for loops is important, because those locations are
used by the frontend to generate optimization remarks. Currently, optimization
remarks for loops often appear on the wrong line, frequently the first line of the
loop body instead of the loop itself. This is confusing because that line might
itself be another loop, or might be somewhere else entirely if the body was an
inlined function call. This happens because of the way we find the loop's
starting location. First, we look for a preheader, and if we find one, and its
terminator has a debug location, then we use that. Otherwise, we look for a
location on an instruction in the loop header.

The fallback heuristic is not bad, but will almost always find the beginning of
the body, and not the loop statement itself. The preheader location search
often fails because there is frequently no preheader, and even when there is
one, depending on how it was formed, it sometimes carries the location of
some preceding code.

I don't see any good theoretical way to fix this problem. On the other hand,
this seems like a straightforward solution: Put the debug location in the
loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert
llvm.loop metadata with appropriate locations when generating debugging
information. With these changes, our loop remarks have much more accurate
locations.
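
As a rough sketch of what building such metadata looks like through LLVM's C++
API (illustrative only; the companion change that actually emits it lives in Clang):

```
#include "llvm/ADT/None.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/DebugLoc.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

void setLoopStartLoc(BasicBlock *Latch, DebugLoc StartLoc) {
  LLVMContext &Ctx = Latch->getContext();
  // Loop IDs are self-referential: operand 0 points back at the node itself;
  // the start location rides along as an additional operand.
  auto Temp = MDNode::getTemporary(Ctx, None);
  Metadata *Ops[] = {Temp.get(), StartLoc.getAsMDNode()};
  MDNode *LoopID = MDNode::get(Ctx, Ops);
  LoopID->replaceOperandWith(0, LoopID);
  Latch->getTerminator()->setMetadata("llvm.loop", LoopID);
}
```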

Differential Revision: http://reviews.llvm.org/D19738

llvm-svn: 270771
2016-05-25 21:42:37 +00:00
Simon Pilgrim
e9a179168b [X86][SSE41] Removed pblendw intrinsics tests - they are auto-upgraded
Equivalent tests included in sse41-intrinsics-x86-upgrade.ll - the i8/i32 immediate diff doesn't matter anymore

llvm-svn: 270767
2016-05-25 21:27:58 +00:00
Simon Pilgrim
0da4a2c2e6 [X86][SSE41] Regenerated intrinsics tests
llvm-svn: 270764
2016-05-25 21:21:51 +00:00
Ahmed Bougacha
b3c9ba99bf [TLI] Also cover Linux 64 libfunc (stat64, ...) prototype checking.
My script missed those in r270750.

llvm-svn: 270763
2016-05-25 21:16:33 +00:00
Simon Pilgrim
5142359484 [X86][SSE41] Removed blendpd/blendps intrinsics tests - they are auto-upgraded
Equivalent tests included in sse41-intrinsics-x86-upgrade.ll

llvm-svn: 270761
2016-05-25 21:06:36 +00:00
Mehdi Amini
638f8c1e34 IRLinker: fix double scheduling of mapping a global value because of an alias
This test was hitting an assertion in the value mapper because
the IRLinker was trying to map two times @A while materializing
the initializer for @C.

Fix http://llvm.org/PR27850

Differential Revision: http://reviews.llvm.org/D20586

llvm-svn: 270757
2016-05-25 21:00:44 +00:00
Simon Pilgrim
0792d32b20 [X86][AVX2] Regenerate avx2 vector shift tests
llvm-svn: 270756
2016-05-25 21:00:40 +00:00
Ahmed Bougacha
15451fb9fc [TLI] Fix NumParams==0 prototype checking typo.
There was a typo in r267758. It caused invalid accesses when
given something like "void @free(...)", as NumParams == 0, and
we then try to look at the 0th parameter.

Turns out, most of these were untested; add both attribute
and missing-prototype checks for all libc libfuncs.
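
For illustration, the shape of such a prototype check with the parameter count
guarded before the parameter access (hypothetical helper, not the actual TLI code):

```
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

// Reject a declaration like "void @free(...)" (zero declared parameters)
// instead of reading getParamType(0) out of bounds.
static bool isValidFreeProto(const FunctionType &FTy) {
  return FTy.getNumParams() == 1 && FTy.getParamType(0)->isPointerTy();
}
```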

Differential Revision: http://reviews.llvm.org/D20543

llvm-svn: 270750
2016-05-25 20:22:45 +00:00
Rafael Espindola
2224908028 Fix shouldAssumeDSOLocal for private linkage.
llvm-svn: 270746
2016-05-25 19:55:16 +00:00
Reid Kleckner
31bb5b2278 [IR] Copy comdats in GlobalObject::copyAttributesFrom
This is probably correct for all uses except cross-module IR linking,
where we need to move the comdat from the source module to the
destination module.
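
A minimal sketch of the idea (assuming LLVM's GlobalObject API; not the exact
patch):

```
#include "llvm/IR/GlobalObject.h"
using namespace llvm;

void copyWithComdat(GlobalObject *Dst, GlobalObject *Src) {
  // Sharing the Comdat object is fine within one module; the IRLinker case
  // mentioned above must instead map it into the destination module.
  Dst->setComdat(Src->getComdat());
}
```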

Fixes PR27870.

Reviewers: majnemer

Differential Revision: http://reviews.llvm.org/D20631

llvm-svn: 270743
2016-05-25 18:36:22 +00:00
Matt Arsenault
1f6cee6a4f AMDGPU: Fix v2i64/v2f64 bitcasts
These operations tend to get promoted away to v4i32 so
this doesn't happen often.

llvm-svn: 270740
2016-05-25 18:07:36 +00:00
Matt Arsenault
ea952af911 AMDGPU: Fix missing br_cc i1 test coverage
Also un-xfail a test.

llvm-svn: 270739
2016-05-25 17:58:27 +00:00
Chad Rosier
9f1f557fac [SelectionDAG] Add smarts for BSWAP in computeKnownBits.
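
The idea, as a worked standalone example (illustrative, not the SelectionDAG
code): the known-zero and known-one masks of bswap(x) are just the byte-swapped
masks of x.

```
#include <cstdint>

struct Known32 { uint32_t Zero, One; }; // bits proven 0 / proven 1

static uint32_t byteSwap(uint32_t V) {
  return (V << 24) | ((V & 0xff00u) << 8) | ((V >> 8) & 0xff00u) | (V >> 24);
}

// If, say, the low byte of x is known to be zero, then the high byte of
// bswap(x) is known to be zero.
static Known32 knownBitsOfBswap(Known32 In) {
  return {byteSwap(In.Zero), byteSwap(In.One)};
}
```
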
llvm-svn: 270738
2016-05-25 17:52:38 +00:00
Matt Arsenault
d17f030a97 AMDGPU: Make vectorization-defeating test changes
Simplifies test updates in the future.

llvm-svn: 270736
2016-05-25 17:42:39 +00:00
Matt Arsenault
e21e61958d AMDGPU: Fix inconsistent lowering of select of vectors
f32 vectors would use a sequence of BFI instructions instead
of unrolled cmp + select. This was better in the case of a VALU
select with SGPR inputs, but we don't have a way of dealing with that
in the DAG.

llvm-svn: 270731
2016-05-25 17:34:58 +00:00
Sanjay Patel
e582594538 [x86] avoid code explosion from LoopVectorizer for gather loop (PR27826)
By making pointer extraction from a vector more expensive in the cost model,
we avoid the vectorization of a loop that is very likely to be memory-bound:
https://llvm.org/bugs/show_bug.cgi?id=27826

There are still bugs related to this, so we may need a more general solution
to avoid vectorizing obviously memory-bound loops when we don't have HW gather
support.
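
For reference, an illustrative gather loop of the kind in question (not the
testcase from the PR): every iteration loads from a computed index, so without
hardware gather support the vectorizer must extract each index from a vector and
issue scalar loads.

```
void gather(float *out, const float *in, const int *idx, int n) {
  for (int i = 0; i < n; ++i)
    out[i] = in[idx[i]]; // indexed load: the memory-bound part
}
```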

Differential Revision: http://reviews.llvm.org/D20601

llvm-svn: 270729
2016-05-25 17:27:54 +00:00
Chris Bieneman
d8d2780ae0 [obj2yaml] [yaml2obj] MachO support for rebase opcodes
This is the first bit of support for MachO __LINKEDIT segment data.

llvm-svn: 270724
2016-05-25 17:09:07 +00:00
Tim Shen
198e5cb8a0 Move and add comments to the top for tailcall-string-rvo.ll
Differential Revision: http://reviews.llvm.org/D20311

llvm-svn: 270722
2016-05-25 17:01:09 +00:00
Hal Finkel
c1f6823ee0 [SDAG] Add a fallback multiplication expansion
LegalizeIntegerTypes does not have a way to expand multiplications for large
integer types (i.e. larger than twice the native bit width). There's no
standard runtime call to use in that case, and so we'd just assert.

Unfortunately, as it turns out, it is possible to hit this case from
standard-ish C code in rare cases. A particular case a user ran into yesterday
involved an __int128 induction variable and a loop with a quadratic (not
linear) recurrence which triggered some backend logic using SCEVExpander. In
this case, the BinomialCoefficient code in SCEV generates some i129 variables,
which get widened to i256. At a high level, this is not actually good (i.e. the
underlying optimization, PPCLoopPreIncPrep, should not be transforming the loop
in question for performance reasons), but regardless, the backend shouldn't
crash because of cost-modeling issues in the optimizer.

This is a straightforward implementation of the multiplication expansion, based
on the algorithm in Hacker's Delight. I validated it against the code for the
mul256b function from http://locklessinc.com/articles/256bit_arithmetic/ using
random inputs. There should be no functional change for previously-working code
(the new expansion code only replaces an assert).
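
A self-contained sketch of the schoolbook expansion idea, using 64-bit halves for
concreteness (the actual legalization operates on the expanded SDAG values):

```
#include <cstdint>

struct U128 { uint64_t Lo, Hi; };

// Multiply two 64-bit values into a 128-bit result by splitting each operand
// into 32-bit halves and summing the partial products with carries.
static U128 mul64x64(uint64_t A, uint64_t B) {
  uint64_t ALo = A & 0xffffffffu, AHi = A >> 32;
  uint64_t BLo = B & 0xffffffffu, BHi = B >> 32;
  uint64_t LoLo = ALo * BLo, LoHi = ALo * BHi;
  uint64_t HiLo = AHi * BLo, HiHi = AHi * BHi;
  uint64_t Mid = (LoLo >> 32) + (LoHi & 0xffffffffu) + (HiLo & 0xffffffffu);
  U128 R;
  R.Lo = (LoLo & 0xffffffffu) | (Mid << 32);
  R.Hi = HiHi + (LoHi >> 32) + (HiLo >> 32) + (Mid >> 32);
  return R;
}
```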

Fixes PR19797.

llvm-svn: 270720
2016-05-25 16:50:22 +00:00
Teresa Johnson
de4624ad69 [ThinLTO] Fix test check prefix so that intended prefix tested
There aren't any checks with the prefix PROMOTE; it should be PROMOTE_MOD1,
which wasn't being tested (but works as expected).

llvm-svn: 270719
2016-05-25 16:45:08 +00:00
Sanjay Patel
289425eb9f [x86, AVX] allow explicit calls to VZERO* to modify state in VZeroUpperInserter pass (PR27823)
As noted in the review, there are still problems, so this doesn't fix the bug completely.
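
An illustrative source pattern for the situation (not the testcase from the PR):
the code already contains an explicit _mm256_zeroupper(), and the pass must model
that call as clearing the dirty upper-YMM state rather than ignoring it.

```
#include <immintrin.h>

float mix(__m256 a, __m128 b) {
  float hi = _mm256_cvtss_f32(a); // AVX use: upper YMM state becomes dirty
  _mm256_zeroupper();             // explicit call that modifies that state
  return hi + _mm_cvtss_f32(b);   // legacy SSE code follows
}
```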

Differential Revision: http://reviews.llvm.org/D20529

llvm-svn: 270718
2016-05-25 16:39:47 +00:00
Simon Pilgrim
a16bac494b [X86][AVX] Sync with clang/test/CodeGen/avx2-builtins.c
Only tests for the gather intrinsic are still to be added

llvm-svn: 270710
2016-05-25 15:30:08 +00:00
Oleg Ranevskyy
34bf60ca68 [SCEV] No-wrap flags are not propagated when folding "{S,+,X}+T ==> {S+T,+,X}"
Summary:
**Description**

This makes `WidenIV::widenIVUse` (IndVarSimplify.cpp) fail to widen narrow IV uses in some cases. As a result, IndVarSimplify may not eliminate narrow IVs when it is actually possible to do so, producing less efficient code.

When `WidenIV::widenIVUse` gets a NarrowUse such as `{(-2 + %inc.lcssa),+,1}<nsw><%for.body3>`, it first tries to get a wide recurrence for it via the `getWideRecurrence` call.
`getWideRecurrence` returns recurrence like this: `{(sext i32 (-2 + %inc.lcssa) to i64),+,1}<nsw><%for.body3>`.

Then a wide use operation is generated by `cloneIVUser`. The generated wide use is evaluated to `{(-2 + (sext i32 %inc.lcssa to i64))<nsw>,+,1}<nsw><%for.body3>`, which is different from the `getWideRecurrence` result. `cloneIVUser` sees the difference and returns nullptr.

This patch also fixes the broken LLVM tests by adding missing <nsw> entries introduced by the correction.

**Minimal reproducer:**
```
int foo(int a, int b, int c);
int baz();

void bar()
{
   int arr[20];
   int i = 0;

   for (i = 0; i < 4; ++i)
     arr[i] = baz();

   for (; i < 20; ++i)
     arr[i] = foo(arr[i - 4], arr[i - 3], arr[i - 2]);
}
```

**Clang command line:**
```
clang++ -mllvm -debug -S -emit-llvm -O3 --target=aarch64-linux-elf test.cpp -o test.ir
```

**Expected result:**
The `-mllvm -debug` log shows that all the IVs for the second `for` loop have been eliminated.

Reviewers: sanjoy

Subscribers: atrick, asl, aemerson, mzolotukhin, llvm-commits

Differential Revision: http://reviews.llvm.org/D20058

llvm-svn: 270695
2016-05-25 13:01:33 +00:00
Simon Pilgrim
e806c3471c [X86][AVX2] Added more fast-isel tests to match clang/test/CodeGen/avx2-builtins.c
llvm-svn: 270685
2016-05-25 10:56:23 +00:00
Simon Pilgrim
eb7d07a957 [X86][AVX2] Begun adding fast-isel tests to match clang/test/CodeGen/avx2-builtins.c
llvm-svn: 270683
2016-05-25 10:15:06 +00:00
Simon Pilgrim
b4a440d8b8 [X86][SSE2] Use storeu intrinsics for _mm_storeu_pd/_mm_storeu_pd tests
Also fixed name of _mm_store1_pd test

llvm-svn: 270681
2016-05-25 09:42:29 +00:00
Simon Pilgrim
4950d6bb8d [X86][SSE] Use storeu intrinsics for _mm_storeu_ps test
llvm-svn: 270680
2016-05-25 09:28:06 +00:00
Simon Pilgrim
1a1ddc32da [X86][SSE] Replace (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) lossless conversion intrinsics with generic IR
Followup to D20528 clang patch, this removes the (V)CVTDQ2PD(Y) and (V)CVTPS2PD(Y) llvm intrinsics and auto-upgrades to sitofp/fpext instead.

Differential Revision: http://reviews.llvm.org/D20568

llvm-svn: 270678
2016-05-25 08:59:18 +00:00
Craig Topper
4710ab1424 [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time.
llvm-svn: 270677
2016-05-25 06:56:32 +00:00
David Majnemer
3c41824d16 [FunctionAttrs] Volatile loads should disable readonly
A volatile load has side effects beyond what callers expect readonly to
signify.  For example, it is not safe to reorder two function calls
which each perform a volatile load to the same memory location.
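
For illustration (not the commit's testcase): g() performs only a volatile load,
yet it must not be inferred readonly, because reordering or merging the two calls
below would change how many times the device register is read.

```
volatile int *device_reg;

int g() { return *device_reg; } // volatile load: has ordering side effects

int readTwice() { return g() + g(); }
```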

llvm-svn: 270671
2016-05-25 05:53:04 +00:00
Zachary Turner
d50b930aa6 [llvm-pdbdump] Decipher the remaining PDB streams.
We now at least know the meaning of every stream of the
PDB file. Yay!

llvm-svn: 270669
2016-05-25 05:49:48 +00:00
Saleem Abdulrasool
f7d75a7227 Revert "llvm-objdump: support dumping AUX records for weak externals"
Revert it until we can figure out the endianness issue.

llvm-svn: 270667
2016-05-25 05:45:02 +00:00
Zachary Turner
7e718f26a3 [llvm-pdbdump] Dump the IPI stream and all records.
llvm-svn: 270661
2016-05-25 04:35:22 +00:00
Rui Ueyama
9930c53d56 pdbdump: fix bug in name hash table.
name_ids() did not return all IDs but only the first NameCount items.
The number of non-zero entries in the IDs vector is NameCount, but that
does not mean that all non-zero entries are at the beginning of the IDs
vector.
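
A sketch of the fixed iteration (hypothetical names, not the actual PDB code):
walk the whole IDs vector and keep every non-zero entry instead of taking only
the first NameCount slots.

```
#include <cstdint>
#include <vector>

std::vector<uint32_t> nameIds(const std::vector<uint32_t> &IDs) {
  std::vector<uint32_t> Result;
  for (uint32_t ID : IDs)
    if (ID != 0)           // non-zero entries may appear anywhere in IDs
      Result.push_back(ID);
  return Result;
}
```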

Differential Revision: http://reviews.llvm.org/D20611

llvm-svn: 270656
2016-05-25 04:07:17 +00:00
Zachary Turner
2984a1ea1d [llvm-pdbdump] Stream 0 isn't actually the MSF superblock.
Oddly enough, I realized we don't actually know what stream
0 is (if anything).

llvm-svn: 270655
2016-05-25 03:53:16 +00:00
Saleem Abdulrasool
144a2028c6 test: use a binary file instead
Generate the obj rather than use yaml2obj.  Hopefully, this fixes the PPC64 test
failures.

llvm-svn: 270654
2016-05-25 03:48:07 +00:00
Zachary Turner
fff89b5e93 [llvm-pdbdump] Dump stream summary list.
Try to figure out what each stream is, and dump its name.

This gives us a better picture of what streams we still don't
understand.

llvm-svn: 270653
2016-05-25 03:43:17 +00:00
Saleem Abdulrasool
73cac6f912 llvm-objdump: support dumping AUX records for weak externals
This is a COFF-specific feature.  Ensure that we can display the weak externals
auxiliary symbol.  It contains useful information (such as the default binding
and how to resolve the symbol).

llvm-svn: 270648
2016-05-25 01:59:32 +00:00
Davide Italiano
c01e301072 [PM] Port BDCE to the new pass manager.
llvm-svn: 270647
2016-05-25 01:57:04 +00:00
Derek Bruening
605cdbc11c [esan|wset] EfficiencySanitizer working set tool fastpath
Summary:
Adds fastpath instrumentation for esan's working set tool.  The
instrumentation for an intra-cache-line load or store consists of an
inlined write to shadow memory bits for the corresponding cache line.

Adds a basic test for this instrumentation.
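
A conceptual sketch of such an inlined shadow update (the real shadow mapping,
scale, and bit layout are defined by the esan runtime; the offset and the
one-byte-per-line granularity here are placeholders):

```
#include <cstdint>

static const uintptr_t kShadowOffset = 0x0000130000000000ULL; // placeholder

// Map a 64-byte cache line to one shadow byte and record that it was touched;
// the instrumentation inlines an equivalent sequence at each load/store.
static void markCacheLineAccessed(uintptr_t Addr) {
  uint8_t *Shadow = reinterpret_cast<uint8_t *>((Addr >> 6) + kShadowOffset);
  *Shadow |= 1;
}
```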

Reviewers: aizatsky

Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits

Differential Revision: http://reviews.llvm.org/D20483

llvm-svn: 270640
2016-05-25 00:17:24 +00:00
Richard Smith
9b2e9d0752 Revert r270569 (teach llvm-mc to generate compressed debug sections in zlib
style). It appears that current ELF linkers are not ready for this.

llvm-svn: 270638
2016-05-25 00:14:12 +00:00
Dan Gohman
e46ddfaa34 [WebAssembly] Put __stack_pointer in the offset field of loads and stores.
Instead of this:

i32.const       $push10=, __stack_pointer
i32.load        $push11=, 0($pop10)

Emit this:

i32.const       $push10=, 0
i32.load        $push11=, __stack_pointer($pop10)

It's not currently clear which is better, though there's a chance the second
form may be better at overall compression. We can revisit this when we have
more data; for now it makes sense to make PEI consistent with isel.

Differential Revision: http://reviews.llvm.org/D20411

llvm-svn: 270635
2016-05-24 23:47:41 +00:00
Michael Zolotukhin
82048571c5 Re-enable "[LoopUnroll] Enable advanced unrolling analysis by default" one more time.
This reverts commit r270577.

llvm-svn: 270630
2016-05-24 23:00:05 +00:00
Michael Zolotukhin
7bf254c21f [LoopUnrollAnalyzer] Fix a crash in UnrolledInstAnalyzer::visitCastInst.
This fixes PR27847. Now for real.

llvm-svn: 270629
2016-05-24 22:59:58 +00:00
Zachary Turner
4aa4d6e21a [codeview] Add support for new type records.
This adds support for parsing and dumping the following
symbol types:

S_LPROCREF
S_ENVBLOCK
S_COMPILE2
S_REGISTER
S_COFFGROUP
S_SECTION
S_THUNK32
S_TRAMPOLINE

As of this patch, the test PDB files no longer have any unknown
symbol types.

llvm-svn: 270628
2016-05-24 22:58:46 +00:00