mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 02:52:53 +02:00
Commit Graph

129268 Commits

Author SHA1 Message Date
Teresa Johnson
a54f488a6d [ThinLTO] Serialize the Module SourceFileName to/from LLVM assembly
Summary:
This change serializes out and in the SourceFileName to LLVM assembly
so that it is preserved through "llvm-dis | llvm-as". This is
necessary to ensure that the global identifiers created for local values
in the module summary index are the same even if the bitcode is
streamed out and read back from LLVM assembly.

Serializing the summary itself to LLVM assembly is in progress.

Reviewers: joker.eph

Subscribers: llvm-commits, joker.eph

Differential Revision: http://reviews.llvm.org/D18588

llvm-svn: 264869
2016-03-30 14:00:02 +00:00
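
As a rough illustration of the new syntax (the path and function name are
hypothetical), the module's source file name is now written out and parsed
back, so it survives "llvm-dis | llvm-as":

source_filename = "/tmp/example.c"

; the summary index derives the global identifier for this local value
; from the source file name (roughly "/tmp/example.c:helper"), so it
; stays stable across an assembly round trip
define internal void @helper() {
  ret void
}
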
Simon Pilgrim
a48ee9f67e [X86][SSE] Test the legalization of vector comparison results
We are currently doing a REALLY bad job of packing results of vector comparisons into the legalized <X x i1> result equivalents - a mixture of PACKSS/PMOVMSKB would be much better here.

llvm-svn: 264867
2016-03-30 13:55:00 +00:00
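
For context, the kind of IR being exercised (hypothetical function, not one
of the added tests): a comparison producing a <4 x i1> result that the
legalizer has to pack into something the target can actually hold.

define <4 x i1> @cmp_lt(<4 x float> %a, <4 x float> %b) {
  %c = fcmp olt <4 x float> %a, %b
  ret <4 x i1> %c
}
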
Benjamin Kramer
ab82adaded [NVPTX] Avoid temporary std::string and make single-use function local to the cpp file.
No functionality change intended.

llvm-svn: 264861
2016-03-30 12:31:51 +00:00
Marianne Mailhot-Sarrasin
9201230000 gold-plugin: Fixed typo in an error message.
llvm-svn: 264860
2016-03-30 12:20:53 +00:00
Simon Pilgrim
85f1cb6896 [X86][SSE] Added tests for clearing upper bits of vector elements
Patterns based on PR6455

llvm-svn: 264857
2016-03-30 11:43:26 +00:00
James Molloy
7bcc6040d2 [VectorUtils] Don't try and truncate PHIs to a smaller bitwidth
We already try not to truncate PHIs in computeMinimalBitwidths. LoopVectorize can't handle it and we really don't need to, because both induction and reduction PHIs are truncated by other means.

However, we weren't bailing out in all the places we should have, and we ended up returning a PHI to be truncated, which caused PR27018.

This fixes PR27018.

llvm-svn: 264852
2016-03-30 10:11:43 +00:00
Chandler Carruth
0c67e1a699 [x86] Fix a horrible bug in our lowering of x86 floating point atomic
operations.

Specifically, we had code that tried to badly approximate reconstructing
all of the possible variations on addressing modes in two x86
instructions based on those in one pseudo instruction. This is not the
first bug this approach has uncovered, so stop doing it altogether.
Instead generically and pedantically copy every operand from the address
over to both new instructions, and strip kill flags from any register
operands.

This fixes a subtle bug seen in the wild where we would mysteriously
drop parts of the addressing mode, causing for example the index
argument in the added test case to just be completely ignored.

Hypothetically, this was an extremely bad miscompile because it actually
caused a predictable and leverageable write of a 64-bit quantity to an
unintended offset (the first element of the array instead of whatever
other element was intended). As a consequence, in theory this could even
have introduced security vulnerabilities.

However, this was only something that could happen with an atomic
floating point add. No other operation could trigger this bug, so it
seems extremely unlikely to have occurred widely in the wild.

But it did in fact occur, and frequently in scientific applications
which were using relaxed atomic updates of a floating point value after
adding a delta. Those would end up being quite badly miscompiled by
LLVM, which is how we found this. Of course, this often looks like
a race condition in the code, but it was actually a miscompile.

I suspect that this whole RELEASE_FADD thing was a complete mistake.
There is no such operation, and I worry that anything other than add
will get remarkably worse code generation. But that's not for this
change....

llvm-svn: 264845
2016-03-30 08:41:59 +00:00
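
A sketch of the access pattern described above (hypothetical function and
names, not the added test case): a relaxed atomic floating-point update of
an indexed array element. With the old pseudo expansion, the index could be
dropped from the rebuilt addressing mode, so the store hit element 0 instead
of the intended element.

define void @add_delta(double* %arr, i64 %i, double %delta) {
  %elt = getelementptr inbounds double, double* %arr, i64 %i
  %old = load atomic double, double* %elt monotonic, align 8
  %new = fadd double %old, %delta
  store atomic double %new, double* %elt monotonic, align 8
  ret void
}
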
Craig Topper
12775416e7 [CodeGen] Mark EVT:getExtendedSizeInBits() as LLVM_READONLY.
I think I had tried this a long time back and some bots failed. Hoping that was with an older gcc and maybe now it will work.

llvm-svn: 264840
2016-03-30 05:26:43 +00:00
Jingyue Wu
b932f8f175 [docs] Add gpucc publication and tutorial.
llvm-svn: 264839
2016-03-30 05:05:40 +00:00
Duncan P. N. Exon Smith
d644811b3a IR: Constify LLVMContext::discardValueNames, NFC
llvm-svn: 264823
2016-03-30 04:32:29 +00:00
Duncan P. N. Exon Smith
a4e51d3329 BitcodeReader: Fix weird whitespace, NFC
llvm-svn: 264822
2016-03-30 04:21:52 +00:00
George Burgess IV
5b40ed6e26 [MemorySSA] Make the visitor more careful with calls.
Prior to this patch, the MemorySSA caching visitor would cache all
calls that it visited. When paired with phi optimization, this can be
problematic. Consider:

define void @foo() {
  ; 1 = MemoryDef(liveOnEntry)
  call void @clobberFunction()
  br i1 undef, label %if.end, label %if.then

if.then:
  ; MemoryUse(??)
  call void @readOnlyFunction()
  ; 2 = MemoryDef(1)
  call void @clobberFunction()
  br label %if.end

if.end:
  ; 3 = MemoryPhi(...)
  ; MemoryUse(?)
  call void @readOnlyFunction()
  ret void
}

When optimizing MemoryUse(?), we visit defs 1 and 2, so we note to
cache them later. We ultimately end up not being able to optimize
past the Phi, so we set MemoryUse(?) to point to the Phi. We then
cache the clobbering call for def 1 to be the Phi.

This commit changes this behavior so that we wipe out any calls
added to VisitedCalls while visiting the defs of a phi we couldn't
optimize.

Aside: With this patch, we can now bootstrap clang/LLVM without a
single MemorySSA verifier failure. Woohoo. :)

llvm-svn: 264820
2016-03-30 03:12:08 +00:00
Chandler Carruth
d7d8eed23b [x86] Extract a helper function to compute the full addressing mode from
an x86 MachineInstr's operands. This will be super useful to fix some
bad atomics code in my next commit.

No functionality changed.

llvm-svn: 264819
2016-03-30 03:10:24 +00:00
Xinliang David Li
bf7d981ba0 [PGO] Handle invoke inst in IR based icall instrumentation
Differential Revision: http://reviews.llvm.org/D18580

llvm-svn: 264818
2016-03-30 02:16:07 +00:00
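
For context, a sketch (hypothetical function) of an indirect call site
reached through an invoke, which the IR-based indirect-call value profiler
can now instrument just like a plain call:

define void @caller(void ()* %fp) personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) {
entry:
  invoke void %fp()
          to label %cont unwind label %lpad
cont:
  ret void
lpad:
  %lp = landingpad { i8*, i32 }
          cleanup
  resume { i8*, i32 } %lp
}

declare i32 @__gxx_personality_v0(...)
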
George Burgess IV
48f0f2dd73 [MemorySSA] Change how the walker views/walks visited phis.
This patch teaches the caching MemorySSA walker a few things:

1. Not to walk Phis we've walked before. It seems that we tried to do
   this before, but it didn't work so well in cases like:

define void @foo() {
  %1 = alloca i8
  %2 = alloca i8
  br label %begin

begin:
  ; 3 = MemoryPhi({%0,liveOnEntry},{%end,2})
  ; 1 = MemoryDef(3)
  store i8 0, i8* %2
  br label %end

end:
  ; MemoryUse(?)
  load i8, i8* %1
  ; 2 = MemoryDef(1)
  store i8 0, i8* %2
  br label %begin
}

Because we wouldn't put Phis in Q.Visited until we tried to visit them.
So, when trying to optimize MemoryUse(?):
  - We would visit 3 above
    - ...Which would make us put {%0,liveOnEntry} in Q.Visited
    - ...Which would make us visit {%0,liveOnEntry}
    - ...Which would make us put {%end,2} in Q.Visited
    - ...Which would make us visit {%end,2}
      - ...Which would make us visit 3
        - ...Which would realize we've already visited everything in 3
        - ...Which would make us conservatively return 3.

In the added test case (@looped_visitedonlyonce), this behavior would
cause us to give incorrect results. Specifically, we'd visit 4 twice
in the same query, but on the second visit, we'd skip while.cond because
it had been visited, visit if.then/if.then2, and cache "1" as the
clobbering def on the way back.

2. If we try to walk the defs of a {Phi,MemLoc} and see it has been
   visited before, just hand back the Phi we're trying to optimize.

I promise this isn't as terrible as it seems. :)

We now insert {Phi,MemLoc} pairs just before walking the Phi's upward
defs. So, we check the cache for the {Phi,MemLoc} pair before checking
if we've already walked the Phi.

The {Phi,MemLoc} pair is (almost?) always guaranteed to have a cache
entry if we've already fully walked it, because we cache as we go.

So, if the {Phi,MemLoc} pair isn't in cache, either:
 (a) we must be in the process of visiting it (in which case, we can't
     give a better answer in a cache-as-we-go DFS walker)

 (b) we visited it, but didn't cache it on the way back (...which seems
     to require `ModifyingAccess` to not dominate `StartingAccess`,
     so I'm 99% sure that would be an error. If it's not an error, I
     haven't been able to get it to happen locally, so I suspect it's
     rare.)

- - - - -

As a consequence of this change, we no longer skip upward defs of phis,
so we can kill the `VisitedOnlyOne` check. This gives us better accuracy
than we had before, at the cost of potentially doing a bit more work
when we have a loop.

llvm-svn: 264814
2016-03-30 00:26:26 +00:00
Adam Nemet
a2c555a186 [Aarch64] Turn on the LoopDataPrefetch pass for Cyclone
llvm-svn: 264811
2016-03-30 00:21:29 +00:00
Adam Nemet
858430e4c1 [PPC] Remove -ppc-loop-prefetch-distance in favor of -prefetch-distance
After the previous change, this can now be overridden centrally in the
pass.

llvm-svn: 264807
2016-03-29 23:45:56 +00:00
Adam Nemet
352693be9b [LoopDataPrefetch] Centralize the tuning cl::opts under the pass
This is effectively NFC, minus the renaming of the options
(-cyclone-prefetch-distance -> -prefetch-distance).

The change was requested by Tim in D17943.

llvm-svn: 264806
2016-03-29 23:45:52 +00:00
Anna Zaks
83e4032a41 [tsan] Do not instrument reads/writes to instruction profile counters.
We have known races on profile counters, which can be reproduced by enabling
-fsanitize=thread and -fprofile-instr-generate simultaneously on a
multi-threaded program. This patch avoids reporting those races by not
instrumenting the reads and writes coming from the instruction profiler.

llvm-svn: 264805
2016-03-29 23:19:40 +00:00
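
Roughly, the counter updates in question look like the following sketch
(names simplified; real counters carry profiling metadata and a dedicated
section). TSan now leaves this load/store pair un-instrumented instead of
reporting the benign race on it.

@__profc_foo = private global [1 x i64] zeroinitializer

define void @foo() {
  %cnt = load i64, i64* getelementptr inbounds ([1 x i64], [1 x i64]* @__profc_foo, i64 0, i64 0)
  %inc = add i64 %cnt, 1
  store i64 %inc, i64* getelementptr inbounds ([1 x i64], [1 x i64]* @__profc_foo, i64 0, i64 0)
  ret void
}
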
Kostya Serebryany
a6e8668e11 [libFuzzer] more trophies
llvm-svn: 264804
2016-03-29 23:13:25 +00:00
Kostya Serebryany
c35be8422b [libFuzzer] more docs
llvm-svn: 264803
2016-03-29 23:07:36 +00:00
Duncan P. N. Exon Smith
f8e316d9f7 ADCE: Remove debug info intrinsics in dead scopes
During ADCE, track which debug info scopes still have live references
from the code, and delete debug info intrinsics for the dead ones.

These intrinsics describe the locations of variables (in registers or
stack slots).  If there's no code left corresponding to a variable's
scope, then there's no way to reference the variable in the debugger and
it doesn't matter what its value is.

I add a DEBUG printout when the described location is in an SSA register,
in case it helps someone trying to track down why locations get lost.
However, we still delete these; the scope itself isn't attached to any
real code, so the ship has already sailed.

llvm-svn: 264800
2016-03-29 22:57:12 +00:00
Fiona Glaser
b642370d36 MachineSink: make shouldSink a TII target hook
Some targets may disagree on what they want sunk or not sunk,
so make this a target hook instead of hardcoding it.

llvm-svn: 264799
2016-03-29 22:44:57 +00:00
Adam Nemet
96b033381d [LoopDataPrefetch] Make more member functions private, NFC.
llvm-svn: 264798
2016-03-29 22:40:02 +00:00
Adrian Prantl
cbf8114bc1 Upgrade some wildly anachronistic debug info in testcases.
llvm-svn: 264797
2016-03-29 22:34:30 +00:00
Sanjay Patel
063e57f80f use FileCheck and auto-check-generation script for exact checking
1. Removed the run line for mingw32 and made the Darwin triples unknown.
   This is a test of 32-bit vs. 64-bit platform and the underlying hardware.
   We have other tests for checking behavioral differences of the OS platform.

2. Changed the CPU specifiers to the attributes they were meant to represent.
   Any CPU that doesn't have SSE4.2 is assumed to have slow unaligned 16-byte accesses,
   so it won't use those here.
 
3. Although the stores really could all be CHECK-DAG, I left them as CHECK-NEXT to
   show the strange behavior of the instruction scheduler in the SLOW_32 case.

4. The odd-looking instructions are due to the use of a null pointer in the IR, so
   we have integer immediate store addresses. Cute.

llvm-svn: 264796
2016-03-29 22:27:39 +00:00
Derek Schuff
f9d46e7d23 Add a print method to MachineFunctionProperties for better error messages
This makes check failures much easier to understand.
Make it empty (but leave it in the class) for NDEBUG builds.

Differential Revision: http://reviews.llvm.org/D18529

llvm-svn: 264780
2016-03-29 20:28:20 +00:00
Aaron Ballman
00d9bf7a62 Clarifying some of the requirements for building with Visual Studio on Windows. Namely, we require the latest Update to be installed (for sanity purposes), and we require CMake 2.8.12.2 for building LLVM with Visual Studio.
llvm-svn: 264779
2016-03-29 20:23:55 +00:00
Kevin Enderby
3a6cb0f262 Fix some bugs in the posix output of llvm-nm, which is documented at
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/nm.html .

1) For Mach-O files, the code was not printing the values in hex, which is the default.
2) The printed values had leading zeros, which they should not have.
3) The address for undefined symbols was printed as spaces instead of 0.
4) With the -A option, the posix output for an archive did not put square
brackets around the archive member name.

rdar://25311883 and rdar://25299678

llvm-svn: 264778
2016-03-29 20:18:07 +00:00
James Y Knight
be5fed0669 [SPARC] Use AtomicExpandPass to expand AtomicRMW instructions.
They were previously expanded to CAS loops in a custom isel expansion,
but AtomicExpandPass knows how to do that generically.

Testing is covered by the existing sparc atomics.ll testcases.

llvm-svn: 264771
2016-03-29 19:09:54 +00:00
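
For context, a sketch of an operation affected by this change (hypothetical
function): an atomicrmw that AtomicExpandPass now rewrites into a generic
compare-and-swap loop rather than relying on the old custom isel expansion.

define i32 @fetch_add(i32* %p, i32 %v) {
  %old = atomicrmw add i32* %p, i32 %v seq_cst
  ret i32 %old
}
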
Matthias Braun
71b67e4c8e MachineVerifier: On dead-def live segments, check that corresponding machine operand has a dead flag
llvm-svn: 264769
2016-03-29 19:07:43 +00:00
Matthias Braun
972ce75cb7 LiveVariables: Fix typo and shorten comment
llvm-svn: 264768
2016-03-29 19:07:40 +00:00
Duncan P. N. Exon Smith
210e27d1ff IR: Add DbgInfoIntrinsic::getVariableLocation
Create a common accessor, DbgInfoIntrinsic::getVariableLocation, which
doesn't care about the type of debug info intrinsic.  Use this to
further unify the implementations of DbgDeclareInst::getAddress and
DbgValueInst::getValue.

Besides being a cleanup, I'm planning to use this to prepare DEBUG
output without having to branch on the concrete type.

llvm-svn: 264767
2016-03-29 18:56:03 +00:00
Ryan Govostes
721ddaf471 Revert "[asan] Make the global_metadata_darwin.ll test require El Capitan or newer"
llvm-svn: 264764
2016-03-29 18:27:24 +00:00
Teresa Johnson
fe0dbfc992 [ThinLTO] Remove post-pass metadata linking support
Since we have moved to a model where functions are imported in bulk from
each source module after making summary-based importing decisions, there
is no longer a need to link metadata as a postpass, and all users have
been removed.

This essentially reverts r255909 and follow-on fixes.

llvm-svn: 264763
2016-03-29 18:24:19 +00:00
Ryan Govostes
8dc6db89b3 [asan] Make the global_metadata_darwin.ll test require El Capitan or newer
llvm-svn: 264758
2016-03-29 17:58:49 +00:00
Nirav Dave
928c803594 Add support for no-jump-tables
Add a soft function attribute controlling the generation of jump tables in CodeGen,
as an initial step towards clang support for gcc's no-jump-tables option.

Reviewers: hans, echristo

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D18321

llvm-svn: 264756
2016-03-29 17:46:23 +00:00
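
A sketch of the attribute at the IR level (hypothetical function, and
assuming the attribute string matches the option name): a switch that would
normally be lowered through a jump table is kept as compare-and-branch
sequences when the attribute is present.

define i32 @dispatch(i32 %x) #0 {
entry:
  switch i32 %x, label %default [
    i32 0, label %a
    i32 1, label %b
    i32 2, label %c
    i32 3, label %d
  ]
a:       ret i32 10
b:       ret i32 20
c:       ret i32 30
d:       ret i32 40
default: ret i32 0
}

attributes #0 = { "no-jump-tables"="true" }
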
Derek Schuff
b44b894dbb Add MachineVerifier check for AllVRegsAllocated MachineFunctionProperty
Summary:
Check that any function that has the property set is free of virtual
register operands.

Also, it is actually VirtRegMap (and not the register allocators) that
removes the VReg operands (except for RegAllocFast).

Reviewers: qcolombet

Subscribers: MatzeB, llvm-commits, qcolombet

Differential Revision: http://reviews.llvm.org/D18535

llvm-svn: 264755
2016-03-29 17:40:22 +00:00
Manman Ren
620c905661 Swift Calling Convention: add swiftself attribute.
Differential Revision: http://reviews.llvm.org/D17866

llvm-svn: 264754
2016-03-29 17:37:21 +00:00
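
At the IR level, the new parameter attribute marks the Swift context
("self") argument, e.g. (hypothetical function and pointer type):

define void @method(i8* swiftself %self) {
  ret void
}
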
Sanjay Patel
555db84966 [x86] add tests to show current memset codegen
llvm-svn: 264748
2016-03-29 17:09:27 +00:00
Sanjoy Das
dd63960ae0 [SCEV] Extract out a MatchBinaryOp; NFCI
MatchBinaryOp abstracts out the IR instructions from the operations they
represent.  While this change is NFC, we will use this factoring later
to map things like `(extractvalue 0 (sadd.with.overflow X Y))` to `(add
X Y)`.

llvm-svn: 264747
2016-03-29 16:40:44 +00:00
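
The later mapping the commit mentions would let SCEV look through patterns
like the following (hypothetical function), treating %sum as an ordinary
(add %x, %y):

define i32 @sum(i32 %x, i32 %y) {
  %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %x, i32 %y)
  %sum = extractvalue { i32, i1 } %res, 0
  ret i32 %sum
}

declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)
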
Sanjoy Das
a2d0caed2b [SCEV] Use Operator::getOpcode instead of manual dispatch; NFC
llvm-svn: 264746
2016-03-29 16:40:39 +00:00
Justin Lebar
d7fed95411 Make InlineSimple's one-arg constructor explicit. NFC
llvm-svn: 264744
2016-03-29 16:26:06 +00:00
Justin Lebar
6cbb306baf Reformat a comment in InlineSimple.cpp. NFC
llvm-svn: 264743
2016-03-29 16:26:03 +00:00
Sanjay Patel
460bea3fa6 regenerate checks
llvm-svn: 264738
2016-03-29 16:11:29 +00:00
Konstantin Zhuravlyov
1612e5731d Test commit access
llvm-svn: 264736
2016-03-29 15:15:44 +00:00
Teresa Johnson
00e14ae055 [ThinLTO] Use new GlobalValue::getGUID helper (NFC)
This was already being used for functions and aliases, but was missed when
handling global variables.

llvm-svn: 264734
2016-03-29 14:49:26 +00:00
Hemant Kulkarni
1e7498594c [llvm-readobj] NFC: Remove unneeded parenthesis
llvm-svn: 264731
2016-03-29 14:20:20 +00:00
Simon Dardis
3f0810342a [mips] Test commit: Mark insertNoop as dead code (NFC)
llvm-svn: 264728
2016-03-29 13:02:19 +00:00
Daniel Sanders
0d769907b4 [mips] Correct MIPS16 jal/jalx to have uimm26 offsets and add MC layer range checks. NFC.
Summary:
However, this has no effect at this time because the instructions affected
are marked 'isCodeGenOnly=1' and have no alternative for the MC layer.

Reviewers: vkalintiris

Subscribers: llvm-commits, dsanders

Differential Revision: http://reviews.llvm.org/D18179

llvm-svn: 264712
2016-03-29 09:40:38 +00:00