mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-21 18:22:53 +01:00
Commit Graph

192420 Commits

Author SHA1 Message Date
Nekotekina
bb9faf3c8b Update azure-pipelines.yml
Enable LLVM_USE_INTEL_JITEVENTS
2020-05-26 22:18:55 +03:00
JohnHolmesII
69cb3f3760
CI: Unbreak packaging on Windows (#2)
* Revert "CI: Emit sha256 sums for Windows"

This reverts commit 16c3a8e733.

* CI: Emit Windows sha without breaking things
2020-04-12 23:45:00 +03:00
JohnHolmesII
16c3a8e733 CI: Emit sha256 sums for Windows 2020-04-12 14:06:12 +03:00
Nekotekina
f5679565d3 Azure: update Win32 build and build llvmlibs_mt.7z separately 2020-03-26 11:31:49 +03:00
Nekotekina
aa6a55e0a0 Azure: remove LLVM_TEMPORARILY_ALLOW_OLD_TOOLCHAIN 2020-03-21 11:54:07 +03:00
Nekotekina
752ea19eff Azure: fix releasing 2020-03-03 12:51:52 +03:00
Nekotekina
0c0b09edb7 Azure: workaround MSVC build (-DLLVM_TEMPORARILY_ALLOW_OLD_TOOLCHAIN=ON) 2020-03-03 11:58:59 +03:00
Nekotekina
d2afdbcc4a DenseMap: add workaround for C++2a builds
Hide operator !=
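
A minimal sketch (hypothetical types, not LLVM's actual code) of the kind of breakage this works around: under C++2a, `a == b` also considers a *reversed* candidate, so an asymmetric iterator/const_iterator comparison that compiled under C++17 can become ambiguous, and a hand-written operator!= can clash with the compiler's rewritten candidates.

```
// Hypothetical stand-ins for DenseMap's iterator/const_iterator pair.
struct ConstIter;
struct Iter {
  int *P = nullptr;
  bool operator==(const ConstIter &R) const; // compares via conversion
};
struct ConstIter {
  int *P = nullptr;
  ConstIter() = default;
  ConstIter(const Iter &I) : P(I.P) {}       // iterator -> const_iterator
};
inline bool Iter::operator==(const ConstIter &R) const { return P == R.P; }

// C++17: `Iter{} == Iter{}` has one candidate (convert the RHS).
// C++2a: the reversed candidate (convert the LHS instead) is equally
// good, so the comparison becomes ambiguous; hiding operator!= and
// keeping a single symmetric operator== avoids the rewritten-candidate
// clashes.
```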
2020-03-03 11:58:59 +03:00
Nekotekina
fdfa1ef6cd RuntimeDyld: workaround use-after-free bug (Sections)
Sections is a SmallVector, and it sometimes grows, invalidating references into it and causing a use-after-free elsewhere.
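
An illustrative sketch (hypothetical code, not the actual RuntimeDyld logic) of the underlying hazard:

```
#include "llvm/ADT/SmallVector.h"

// Holding a reference into a SmallVector across a push_back is unsafe:
// growth reallocates the storage and the reference dangles.
void hazard(llvm::SmallVector<int, 4> &Sections) {
  int &First = Sections[0]; // points into the current allocation
  Sections.push_back(42);   // may grow: First now dangles
  (void)First;              // any use here is a use-after-free
}
```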
2020-03-03 11:58:59 +03:00
Nekotekina
7bba893902 Disable GDBRegistrationListener
It makes emitting objects extremely slow.
GDB doesn't work properly with it anyway.
GDB also often crashes because it cannot read the format.
2020-03-03 11:58:59 +03:00
Nekotekina
acc04ce154 X86: add RTM to Haswell+ features 2020-03-03 11:58:59 +03:00
Nekotekina
ecdf4ca664 X86: avoid vector-scalar shifts if the splat amount is directly a vector ADD/SUB/AND op.
Prefer vector-vector shifts when available (AVX2+).
Improves the code generated for rotates and funnel shifts.
Otherwise a shuffle plus a slower vector-scalar shift would be generated.
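
A scalar model (for illustration) of the pattern this targets: in a rotate, the splatted shift amounts are produced by AND/SUB, and with AVX2's per-element shifts they can stay in vector form instead of being shuffled out into a scalar amount.

```
#include <cstdint>

// Per-element rotate as written before vectorization; the AND and SUB
// feeding the shift amounts are the "vector ADD/SUB/AND op" in the splat.
uint32_t rotl32(uint32_t X, uint32_t RawAmt) {
  uint32_t Amt = RawAmt & 31;                   // AND feeding the amount
  return (X << Amt) | (X >> ((32 - Amt) & 31)); // SUB feeding the other half
}
```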
2020-03-03 11:58:59 +03:00
Nekotekina
c588b7cae7 MCJIT: don't finalize modules on symbol lookup (workaround)
This is extremely slow and unnecessary when finalization is performed manually.
In LLVM 6 this wasn't a problem.
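
A hedged sketch of the manual-finalization flow this assumes (MCJIT API names; engine construction elided):

```
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include <cstdint>

// The client finalizes once, explicitly; with this workaround a symbol
// lookup no longer re-finalizes every loaded module on each query.
uint64_t lookupEntry(llvm::ExecutionEngine &EE) {
  EE.finalizeObject();                   // relocate + finalize memory once
  return EE.getFunctionAddress("entry"); // plain lookup afterwards
}
```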
2020-03-03 11:58:59 +03:00
Nekotekina
c6a4047b8b X86: add patterns for X86ISD::VSHLV and X86ISD::VSRLV
Replace VSELECT nodes that zero the result when the shift amount exceeds the legal SHL/SRL range.
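
A scalar model of the matched node: AVX2's VPSLLV/VPSRLV already zero the result for out-of-range amounts, which is what the replaced VSELECT was expressing.

```
#include <cstdint>

// Per-element semantics being pattern-matched: select-to-zero on an
// out-of-range shift amount is exactly VSHLV/VSRLV behavior.
uint32_t shlv32(uint32_t X, uint32_t Amt) {
  return Amt < 32 ? (X << Amt) : 0; // previously VSELECT + shift
}
```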
2020-03-03 11:58:59 +03:00
Nekotekina
99317acfdd X86: add pattern for X86ISD::VSRAV
Detect clamping of the ashr shift amount to the maximum legal value.
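
A scalar model of the matched pattern; VPSRAV effectively clamps the amount itself, so the explicit clamp can fold away.

```
#include <cstdint>

// Arithmetic shift with the amount clamped to the maximum legal value.
int32_t srav32(int32_t X, uint32_t Amt) {
  return X >> (Amt < 31 ? Amt : 31); // clamp, then ashr (sign-filling)
}
```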
2020-03-03 11:58:59 +03:00
Nekotekina
c593e2168a X86: expand detectAVGPattern()
Allow all integer widths in the pattern, and allow ashr.
Handle signed and mixed cases, making it possible to replace the truncation.
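
The unsigned form of the idiom this matcher recognizes; widening, adding one, and shifting right maps onto a single PAVG-style instruction, and this change extends the match to wider elements and signed/mixed variants.

```
#include <cstdint>

// Round-up average computed in a wider type, then truncated; the
// truncation is what the expanded pattern can now replace.
uint8_t avg_u8(uint8_t A, uint8_t B) {
  return uint8_t((uint16_t(A) + uint16_t(B) + 1) >> 1);
}
```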
2020-03-03 11:58:59 +03:00
Nekotekina
b51b3721c1 X86: optimize VSELECT for v16i8 with shl + sign bit test 2020-03-03 11:58:59 +03:00
Nekotekina
6e3871c033 X86: LowerShift: new algorithm for vector-vector shifts
Emit a pair of double-width shifts when possible.
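
One way to read the double-width trick (an assumed scalar model, not the exact lowering): when adjacent half-width lanes share a shift amount, a single double-width shift plus masking covers both lanes.

```
#include <cstdint>

// Two 16-bit lanes packed in a 32-bit value, shifted left by the same
// amount with one 32-bit shift; the mask clears bits that spilled from
// the low lane into the high lane. Assumes Amt is in [0, 15].
uint32_t shlTwoU16Lanes(uint32_t Pair, uint32_t Amt) {
  uint32_t Shifted = Pair << Amt;           // one double-width shift
  uint32_t Spill = ((1u << Amt) - 1) << 16; // low-lane bits that crossed
  return Shifted & ~Spill;
}
```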
2020-03-03 10:50:32 +03:00
Nekotekina
80d76b612e X86: Fix/workaround Small Code Model for JIT
Force RIP-relative jump tables and global values.
Force RIP-relative all-zeros / all-ones constants.
These were causing crashes due to the use of absolute addressing.
2020-03-03 10:50:32 +03:00
Ivan
f286257383 Set up CI with Azure Pipelines 2020-03-03 10:50:32 +03:00
Kirill Bobyrev
0fc5f58b28 Use temporary directory for tests in D74346 2020-02-24 12:19:07 +01:00
Benjamin Kramer
707b7beede [ORC] Remove spammy debug print 2020-02-24 12:10:13 +01:00
Kerry McLaughlin
96fc6c2abc [AArch64][SVE] Add intrinsics for SVE2 cryptographic instructions
Summary:
Implements the following SVE2 intrinsics:
 - @llvm.aarch64.sve.aesd
 - @llvm.aarch64.sve.aesimc
 - @llvm.aarch64.sve.aese
 - @llvm.aarch64.sve.aesmc
 - @llvm.aarch64.sve.rax1
 - @llvm.aarch64.sve.sm4e
 - @llvm.aarch64.sve.sm4ekey
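
A hedged C-level sketch of how these surface (assuming the corresponding ACLE names in arm_sve.h and a compiler targeting +sve2-aes):

```
#include <arm_sve.h> // assumes ACLE SVE2-AES intrinsic support

// One AES decryption round step: svaesd_u8 / svaesimc_u8 are assumed to
// lower to @llvm.aarch64.sve.aesd / @llvm.aarch64.sve.aesimc.
svuint8_t aesDecRound(svuint8_t State, svuint8_t RoundKey) {
  return svaesimc_u8(svaesd_u8(State, RoundKey));
}
```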

Reviewers: sdesmalen, c-rhodes, dancgr, cameron.mcinally, efriedma, rengolin

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74833
2020-02-24 10:49:31 +00:00
Bevin Hansson
4f8b0d2f56 [Intrinsic] Add fixed point saturating division intrinsics.
Summary:
This patch adds intrinsics and ISelDAG nodes for signed
and unsigned fixed-point division:

```
llvm.sdiv.fix.sat.*
llvm.udiv.fix.sat.*
```

These intrinsics perform scaled, saturating division
on two integers or vectors of integers. They are
required for the implementation of the Embedded-C
fixed-point arithmetic in Clang.
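
A scalar model (assumed semantics; rounding details elided) of what llvm.sdiv.fix.sat.i32 with scale S computes:

```
#include <cstdint>
#include <limits>

// Fixed-point division keeps the scale by pre-scaling the dividend,
// then clamps to the i32 range instead of wrapping.
// Assumes B != 0 and S <= 31.
int32_t sdivFixSat32(int32_t A, int32_t B, unsigned S) {
  int64_t Q = int64_t(A) * (int64_t(1) << S) / B; // pre-scale dividend
  if (Q > std::numeric_limits<int32_t>::max())
    return std::numeric_limits<int32_t>::max();   // saturate high
  if (Q < std::numeric_limits<int32_t>::min())
    return std::numeric_limits<int32_t>::min();   // saturate low
  return int32_t(Q);
}
```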

Reviewers: bjope, leonardchan, craig.topper

Subscribers: hiraditya, jdoerfert, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71550
2020-02-24 10:50:52 +01:00
Calixte Denizet
27f50511fb [profile] Don't dump counters when forking and don't reset when calling exec** functions
Summary:
There is no need to write out gcdas when forking because we can just reset the counters in the parent process.
Say a counter is N before the fork; at the fork it is reset to 0 in the child process.
The parent process then increments it by P, and the child process increments it by C.
When dump is run at exit, the parent process dumps N+P for the given counter and the child process dumps 0+C, so when the gcdas are merged the resulting counter is N+P+C.
As for the exec** functions, since the current process is replaced by another one, there is no need to reset the counters, but the gcdas must be written out because the counters would otherwise be lost.
To avoid leaving the lists in a bad state, we lock them during the fork and the flush (if called explicitly), and we lock them whenever an element is added.
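
A runnable sketch of the scheme (plain POSIX, not the real libprofile code), with N=5, P=2, C=3 as in the description above:

```
#include <unistd.h>
#include <cstdio>

static long Counter = 5; // N = 5 before the fork

int main() {
  if (fork() == 0) {
    Counter = 0;  // child: reset instead of dumping at fork
    Counter += 3; // C = 3
    std::printf("child dumps %ld\n", Counter);  // 0 + C = 3
  } else {
    Counter += 2; // P = 2
    std::printf("parent dumps %ld\n", Counter); // N + P = 7
  }
  // Merging the two dumps yields N + P + C = 10; nothing is counted twice.
}
```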

Reviewers: marco-c

Reviewed By: marco-c

Subscribers: hiraditya, cfe-commits, #sanitizers, llvm-commits, sylvestre.ledru

Tags: #clang, #sanitizers, #llvm

Differential Revision: https://reviews.llvm.org/D74953
2020-02-24 10:38:33 +01:00
Pavel Labath
f744a9cab7 Use new FailedWithMessage matcher in DWARFDebugLineTest.cpp
Summary:
This should produce slightly better error messages in case of failures.
Only slightly, because this code was pretty careful about that to begin
with -- I've seen code which does much worse.
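
A hedged sketch of the matcher in use (names from llvm/Testing/Support/Error.h; the test body itself is illustrative):

```
#include "llvm/Testing/Support/Error.h"
#include "gtest/gtest.h"

using namespace llvm;

TEST(DebugLineExample, ReportsSpecificMessage) {
  // On failure, the matcher prints both the expected and the actual
  // message, which is where the improved diagnostics come from.
  Expected<int> E = createStringError(inconvertibleErrorCode(),
                                      "unexpected end of data");
  EXPECT_THAT_EXPECTED(E, FailedWithMessage("unexpected end of data"));
}
```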

Reviewers: jhenderson, dblaikie

Subscribers: llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D74899
2020-02-24 10:27:00 +01:00
Bevin Hansson
d148a7c68f [MC] Widen the functional unit type from 32 to 64 bits.
Summary:
The type used to represent functional units in MC is
'unsigned', which is 32 bits wide. This is currently
not a problem in any upstream target as no one seems
to have hit the limit on this yet, but in our
downstream one, we need to define more than 32
functional units.
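
The limit exists because functional units are tracked as a one-bit-per-unit resource mask; a small hedged sketch:

```
#include <cstdint>

// With 'unsigned' the mask holds at most 32 units; widening the typedef
// doubles the headroom without changing the scheduling logic.
using FuncUnits = uint64_t;
constexpr FuncUnits unitBit(unsigned U) { return FuncUnits(1) << U; }
static_assert(unitBit(40) != 0, "unit 40 requires a 64-bit mask");
```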

Increasing the size does not seem to cause a huge
size increase in the binary (an llc debug build went
from 1366497672 to 1366523984, a difference of 26k),
so perhaps it would be acceptable to have this patch
applied upstream as well.

Subscribers: hiraditya, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71210
2020-02-24 09:37:00 +01:00
Sam Parker
9d9c7e75ed [ARM][MVE] Combine more extending masked loads
For MVE, don't look at the users of the extending loads, so that more
of them are considered for folding.

Differential Revision: https://reviews.llvm.org/D74958
2020-02-24 07:50:15 +00:00
Lang Hames
7247401c11 [JITLink] Add a MachO x86-64 GOT and Stub bypass optimization.
This optimization bypasses GOT loads and calls/branches through stubs when the
ultimate target of the access/branch is found to be within range of the
reference.
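
A hedged sketch of the reachability test behind the bypass (assumed constant: a signed 32-bit PC-relative displacement):

```
#include <cstdint>
#include <limits>

// A GOT load or stub call can be rewritten as a direct reference when
// the real target sits within the signed 32-bit displacement reachable
// from the fixup.
bool isInRange(uint64_t FixupAddr, uint64_t TargetAddr) {
  int64_t Disp = int64_t(TargetAddr) - int64_t(FixupAddr);
  return Disp >= std::numeric_limits<int32_t>::min() &&
         Disp <= std::numeric_limits<int32_t>::max();
}
```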

Extra debugging output is also added to the generic JITLink algorithm and
basic GOT and Stubs builder utility to aid debugging.
2020-02-23 23:38:31 -08:00
Craig Topper
0f08ad8e75 [X86] When creating X86ISD::MGATHER nodes from AVX2 gather intrinsics, cast the mask to integer type.
The gather intrinsics use a floating-point mask when the result
type is FP, but we call DemandedBits on the mask assuming it's an
integer type. We also use integer types when we create it from
generic IR. So add a bitcast on the intrinsic path to guarantee
the integer type.
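
A hedged sketch of the fix's shape (SelectionDAG helpers, simplified context):

```
#include "llvm/CodeGen/SelectionDAG.h"

using namespace llvm;

// Normalize an FP gather mask to the equivalent integer vector type so
// later DemandedBits reasoning sees the type it assumes; a no-op bitcast
// if the mask is already integer.
static SDValue castMaskToInt(SelectionDAG &DAG, SDValue Mask) {
  EVT IntVT = Mask.getValueType().changeVectorElementTypeToInteger();
  return DAG.getBitcast(IntVT, Mask);
}
```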
2020-02-23 23:00:41 -08:00
Craig Topper
6d78a8f8e7 [X86] Use custom isel for gather/scatter instructions.
The type profile we used for the isel patterns lied about how
many operands the gather/scatter node has, in order to skip the
index and scale operands. This allowed us to expand the baseptr
operand into base, displacement, and segment, and then merge the
index and scale with them in the final instruction during isel.
That was a hack which relied on isel not checking the number of
operands at all.

This commit switches to custom isel where we can manage this
directly without relying on holes in the isel checking.
2020-02-23 22:33:06 -08:00
Craig Topper
7f1c4b0147 [SelectionDAG] Remove ISD::LIFETIME_START/LIFETIME_END from assert in getMemIntrinsicNode.
These appear to have their own SDNode type and shouldn't use
MemIntrinsicSDNode.
2020-02-23 22:32:36 -08:00
QingShan Zhang
c8e3ab017c [NFC][PowerPC] Refactor the tryAndWithMask()
Split the tryAndWithMask into several small calls.

Differential Revision: https://reviews.llvm.org/D72250
2020-02-24 04:02:24 +00:00
Hongtao Yu
49f50c7626 IR printing for single function with the new pass manager.
Summary:
With the new pass manager, IR printing always prints all functions in a
module, even when -filter-print-funcs is specified. This change fixes
that. There are two exceptions, however: the user-specified wildcard
switch -filter-print-funcs=* and -print-module-scope, under which the IR
of all functions should still be printed.

Test Plan:
make check-clang
make check-llvm

Reviewers: wenlei

Reviewed By: wenlei

Subscribers: wenlei, hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D74814
2020-02-23 15:28:57 -08:00
Craig Topper
8e843e8cc6 [SelectionDAG] Remove SelectionDAG::getTargetMemSDNode now that its not used.
Targets are expected to use getMemIntrinsicNode and not provide
their own subclasses. X86 was previously the only user.
2020-02-23 15:13:50 -08:00
Craig Topper
6eaf3a379d [X86] Remove most X86 specific subclasses of MemSDNode. Just use a MemIntrinsicSDNode as we usually do.
Leave the gather/scatter subclasses, but make them inherit from
MemIntrinsicSDNode and delete their constructor and destructor.
This way we can still have the getIndex, getMask, etc. convenience
functions.
2020-02-23 15:13:32 -08:00
Craig Topper
28ac0f3baa [X86] Enable the use of movlps for i64 atomic load on 32-bit targets with sse1.
There is still a little room for improvement: we could use movlps to
store to the stack temporary needed to move data out of the xmm
register after the load.
2020-02-23 15:11:38 -08:00
Craig Topper
48e959027f [X86] Use FIST for i64 atomic stores on 32-bit targets without SSE. 2020-02-23 15:11:38 -08:00
Jonas Paulsson
b70c140e59 [SystemZ] Support the kernel back chain.
In order to build the Linux kernel, the back chain must be supported with
packed-stack. The back chain is then stored topmost in the register save
area.

Review: Ulrich Weigand

Differential Revision: https://reviews.llvm.org/D74506
2020-02-23 13:42:36 -08:00
Florian Hahn
2a8d2e9744 [AArch64] Update new test.
Changed after 7769030b9310c1865fd331edb78dc242a39b109a.
2020-02-23 19:13:13 +00:00
Florian Hahn
5f8cf84ae0 Recommit "[PatternMatch] Match XOR variant of unsigned-add overflow check."
This version fixes a buildbot failure caused by picking the wrong insert
point for XORs. We cannot pick the XOR binary operator as insert point,
as it is not guaranteed that both input operands for the overflow
intrinsic are defined before it.

This reverts the revert commit
c7fc0e5da6c3c36eb5f3a874a6cdeaedb26856e0.
2020-02-23 18:33:18 +00:00
Craig Topper
9b0c7c26a4 [X86] Regenerate some tests to show FMA4 comments. NFC 2020-02-23 09:55:53 -08:00
Sanjay Patel
63607bdd89 [SDAG] fold fsub -0.0, undef to undef rather than NaN
A question about this behavior came up on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2020-February/139003.html
...and as part of backend improvements in D73978.

We decided not to implement a more general change that would have
folded any FP binop with nearly arbitrary constant + undef operand
to undef because that is not theoretically correct (even if it is
practically correct).

This is the SDAG-equivalent to the IR change in D74713.
2020-02-23 11:36:53 -05:00
Florian Hahn
d3ea77fff6 [DSE] Track overlapping stores.
Add a map from BasicBlocks to overlap intervals. For partial writes, we
can keep track of them in IOLs. We only add candidates that are valid
for elimination.

Reviewers: dmgreen, bryant, asbirlea, Tyker

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D73757
2020-02-23 15:44:40 +00:00
Nuno Lopes
db1959b9b6 [NFC] fix test nan value 2020-02-23 12:42:47 +00:00
Craig Topper
34d1292d80 [X86] Add sse2 command lines to sse-intrinsics-fast-isel.ll.
The extra vector types available with sse2 cause us to produce
different code.
2020-02-22 22:40:17 -08:00
Craig Topper
c29b9bade2 [X86] Add AddToWorklist(N) after calls to SimplifyDemandedBits/SimplifyDemandedVectorElts that are called on an operand of N.
If a simplification occurs, the operand will be added to the worklist.
But since the demanded mask was based on N, we need to make sure
we revisit N in case there are more simplifications to be done.
Returning SDValue(N, 0) as we do only tells DAG combine that
something changed; it won't make it add anything to the
worklist.
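
A hedged sketch of the resulting idiom in a target combine (DAGCombinerInfo/TargetLowering entry points, simplified surroundings):

```
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/TargetLowering.h"

using namespace llvm;

// After simplifying an operand's demanded bits, explicitly queue N again
// so follow-on simplifications of N itself are not missed; returning
// SDValue(N, 0) alone does not re-queue N.
static SDValue simplifyOperand(SDNode *N, SDValue Op, const APInt &Demanded,
                               TargetLowering::DAGCombinerInfo &DCI,
                               const TargetLowering &TLI) {
  if (TLI.SimplifyDemandedBits(Op, Demanded, DCI)) {
    DCI.AddToWorklist(N);   // make DAGCombine revisit N
    return SDValue(N, 0);   // report that a change occurred
  }
  return SDValue();
}
```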

Found while playing around with using VEXTRACT_STORE in more cases.
But I guess this doesn't affect any of our existing tests.
2020-02-22 21:42:59 -08:00
Craig Topper
838aaca549 [X86] Teach EltsFromConsecutiveLoads that it's ok to form a v4f32 VZEXT_LOAD with a 64 bit memory size on SSE1 targets.
We can use MOVLPS, which will load 64 bits, but we need a v4f32
result type. We already have isel patterns for this.

The code here is a little hacky. We can probably improve it with
more isel patterns.
2020-02-22 18:50:52 -08:00
Craig Topper
36e410555e [X86] Use movlps for i64 atomic stores on 32-bit targets with sse1.
This is similar to using movd, which we do for sse2 targets.

I've added a DAG combine for VEXTRACT_STORE to use SimplifyDemandedVectorElts
to clean up some artifacts from type legalization.
2020-02-22 18:22:47 -08:00
Lang Hames
b6c9039962 [ORC] Update LLJIT to automatically run specially named initializer functions.
The GenericLLVMIRPlatformSupport class runs a transform on all LLVM IR added to
the LLJIT instance to replace instances of llvm.global_ctors with a specially
named function that runs the corresponding static initializers (see
GlobalCtorDtorScraper in lib/ExecutionEngine/Orc/LLJIT.cpp). This patch
updates the GenericIRPlatform class to check for this specially named function
in other materialization units that are added to the JIT and, if found, add
the function to the initializer work queue. Doing this allows object files
that were compiled from IR and cached to be reloaded in subsequent JIT sessions
without their initializers being skipped.
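
A hedged sketch of the client-side flow this enables (LLJIT builder/initialize APIs; error handling kept minimal):

```
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"

using namespace llvm;
using namespace llvm::orc;

// Add IR (or cached objects compiled from IR) and run initializers;
// with this patch the specially named constructor functions found in
// reloaded objects are picked up too.
Expected<std::unique_ptr<LLJIT>> makeAndInit(ThreadSafeModule TSM) {
  auto J = LLJITBuilder().create();
  if (!J)
    return J.takeError();
  if (auto Err = (*J)->addIRModule(std::move(TSM)))
    return std::move(Err);
  if (auto Err = (*J)->initialize((*J)->getMainJITDylib())) // run ctors
    return std::move(Err);
  return std::move(*J);
}
```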

To enable testing, this patch also updates the lli tool's -jit-kind=orc-lazy mode
to respect the -enable-cache-manager and -object-cache-dir options, and modifies
the CompileOnDemandLayer to rename extracted submodules to include a hash of the
names of their symbol definitions. This allows a simple object caching scheme
based on module names (which was already implemented in lli) to work with the
lazy JIT.
2020-02-22 11:49:14 -08:00