VerifyDAGDiverence costs compilation time; avoid running it in non-debug
builds.
Differential Revision: https://reviews.llvm.org/D52454
llvm-svn: 343086
This broke Chromium's Android build (https://crbug.com/889390) and the
polly-aosp buildbot
(http://lab.llvm.org:8011/builders/aosp-O3-polly-before-vectorizer-unprofitable).
> Originally committed in rL342210 but was reverted in rL342260 because
> it was causing issues in vectorized code, because I had forgotten to
> ensure that we're operating on scalar values.
>
> Original commit message:
>
> On failing to find sequences that can be converted into dual macs,
> try to find sequential 16-bit loads that are used by muls, which we
> can then lower using smultb, smulbt or smultt together with a wide load.
>
> Differential Revision: https://reviews.llvm.org/D51983
llvm-svn: 343082
Since the body of the "else if" contains
// TODO
I suppose someone will need the variable again at some point, but with
-Werror the warning made it not compile at all.
llvm-svn: 343071
for lazy compilation, rather than a callback manager.
The new mechanism does not block compile threads, and does not require
function bodies to be renamed.
Future modifications should allow laziness on a per-module basis to work
without any modification of the input module.
llvm-svn: 343065
In some scenarios, LLVM will remove llvm.dbg.labels in the IR. For example,
when the labels are in unreachable blocks, these labels will not
be generated in LLVM IR. In that case, these debug labels will have
address zero as their address, which is not a legal address for the debugger
to set breakpoints at or query sources from. So, the patch suppresses the
address info (DW_AT_low_pc) of removed labels.
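As a generic illustration (hypothetical C++ code, not from the patch), one way
such a label arises:
  // The label "dead" is in unreachable code, so its llvm.dbg.label is dropped
  // from the IR. Without this fix, DWARF would still describe the label, with
  // DW_AT_low_pc = 0.
  int f(int X) {
    if (X > 0)
      return X;
    return -X;
  dead:  // unreachable: no goto targets it
    return X + 1;
  }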
Fix a build failure on the clang-stage1-cmake-RA-incremental BuildBot on macOS.
Differential Revision: https://reviews.llvm.org/D51908
llvm-svn: 343062
implementation as lazy compile callbacks, and a "lazy re-exports" utility that
builds lazy call-throughs.
Lazy call-throughs are similar to lazy compile callbacks (and are based on the
same underlying state saving/restoring trampolines) but resolve their targets
by performing a standard ORC lookup rather than invoking a user supplied
compiler callback. This allows them to inherit the thread-safety of ORC lookups
while blocking only the calling thread (whereas compile callbacks also block one
compile thread).
Lazy re-exports provide a simple way of building lazy call-throughs. Unlike a
regular re-export, a lazy re-export generates a new address (a stub entry point)
that will act like the re-exported symbol when called. The first call via a
lazy re-export will trigger compilation of the re-exported symbol before calling
through to it.
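A minimal usage sketch (hedged: it assumes the lazyReexports/SymbolAliasMap
API spelling; the symbol name "foo" and the two JITDylibs are hypothetical):
  #include "llvm/ExecutionEngine/Orc/Core.h"
  #include "llvm/ExecutionEngine/Orc/LazyReexports.h"

  using namespace llvm;
  using namespace llvm::orc;

  // Define "foo" in FacadeJD as a lazy re-export of "foo" in ImplJD. Calling
  // the facade's "foo" enters a stub whose call-through performs a standard
  // ORC lookup of the real "foo" (triggering its compilation), then calls it.
  Error addLazyFoo(ExecutionSession &ES, LazyCallThroughManager &LCTM,
                   IndirectStubsManager &ISM, JITDylib &ImplJD,
                   JITDylib &FacadeJD) {
    SymbolAliasMap Aliases;
    Aliases[ES.intern("foo")] = SymbolAliasMapEntry(
        ES.intern("foo"), JITSymbolFlags::Exported | JITSymbolFlags::Callable);
    return FacadeJD.define(lazyReexports(LCTM, ISM, ImplJD, std::move(Aliases)));
  }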
llvm-svn: 343061
This will allow trampoline pools to be re-used for a new lazy-reexport utility
that looks up function bodies using the standard symbol lookup process
(rather than using a user-provided compile function). This new utility provides
the same capabilities (since MaterializationUnits already allow user supplied
compile functions to be run) as JITCompileCallbackManager, but can use the new
asynchronous lookup functions to avoid blocking a compile thread.
This patch also updates createLocalCompileCallbackManager to return an error if
a callback manager cannot be created, and updates clients of that API to
account for the change. Finally, OrcCBindingsStack is updated so that if
a callback manager is not available for the target platform, a valid stack
(without support for lazy compilation) can still be constructed.
llvm-svn: 343059
LLJIT and LLLazyJIT can now be constructed with an optional NumCompileThreads
argument. If this is non-zero then a thread-pool will be created with the
given number of threads, and compile tasks will be dispatched to the thread
pool.
To enable testing of this feature, two new flags are added to lli:
(1) -compile-threads=N (N = 0 by default) controls the number of compile threads
to use.
(2) -thread-entry can be used to execute code on additional threads. For each
-thread-entry argument supplied (multiple are allowed) a new thread will be
created and the given symbol called. These additional thread entry points are
called after static constructors are run, but before main.
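A sketch of enabling the pool (hedged: it uses the later LLJITBuilder
spelling; at the time of this commit the count was a constructor argument):
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"

  llvm::Error buildThreadedJIT() {
    using namespace llvm::orc;
    // Four dedicated compile threads; 0 (the default) keeps compilation on
    // the thread that triggers it.
    auto J = LLJITBuilder().setNumCompileThreads(4).create();
    if (!J)
      return J.takeError();
    // Modules added to *J are now compiled on the thread pool as needed.
    return llvm::Error::success();
  }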
llvm-svn: 343058
compilation of IR in the JIT.
ThreadSafeContext is a pair of an LLVMContext and a mutex that can be used to
lock that context when it needs to be accessed from multiple threads.
ThreadSafeModule is a pair of a unique_ptr<Module> and a
shared_ptr<ThreadSafeContext>. This allows the lifetime of a ThreadSafeContext
to be managed automatically in terms of the ThreadSafeModules that refer to it:
Once all modules using a ThreadSafeContext are destructed, and provided the
client has not held on to a copy of the shared context pointer, the context will be
automatically destructed.
This scheme is necessary due to the following constraints: (1) We need multiple
contexts for multithreaded compilation (at least one per compile thread plus
one to store any IR not currently being compiled, though one context per module
is simpler). (2) We need to free contexts that are no longer being used so that
the JIT does not leak memory over time. (3) Module lifetimes are not
predictable (modules are compiled as needed depending on the flow of JIT'd
code) so there is no single point where contexts could be reclaimed.
JIT clients not using concurrency can safely use one ThreadSafeContext for all
ThreadSafeModules.
JIT clients who want to be able to compile concurrently should use a different
ThreadSafeContext for each module, or call setCloneToNewContextOnEmit on their
top-level IRLayer. The former reduces compile latency (since no clone step is
needed) at the cost of additional memory overhead for uncompiled modules (as
every uncompiled module will duplicate the LLVM types, constants and metadata
that have been shared).
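A minimal sketch of the one-context-per-module pattern (assuming the
ThreadSafeModule constructor that takes ownership of a fresh context):
  #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include <memory>

  llvm::orc::ThreadSafeModule makeTSM(llvm::StringRef Name) {
    // One fresh context per module maximizes compile concurrency, at the cost
    // of duplicating shared types/constants across uncompiled modules.
    auto Ctx = std::make_unique<llvm::LLVMContext>();
    auto M = std::make_unique<llvm::Module>(Name, *Ctx);
    // The ThreadSafeModule shares ownership of the wrapped context; once the
    // last module referring to it is destroyed, the context is freed too.
    return llvm::orc::ThreadSafeModule(std::move(M), std::move(Ctx));
  }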
llvm-svn: 343055
This reverts commit bd7b44f35ee9fbe365eb25ce55437ea793b39346.
Reland r342994: disabled the optimization and explicitly enabled it in the test
with -mllvm -consthoist-min-num-to-rebase<unsigned>=0.
[ConstHoist] Do not rebase single (or few) dependent constant
If an instance (InsertionPoint or IP) of Base constant A has only one or a few
rebased constants depending on it, do NOT rebase. One extra ADD instruction is
required to materialize each rebased constant, assuming A and the rebased
constants have the same materialization cost.
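As a generic illustration (hypothetical constants, not from the patch):
  #include <cstdint>

  // Base is an expensive-to-materialize constant. Rebasing derives the second
  // constant as Base + 0x10 (one extra ADD) instead of materializing it from
  // scratch; with only one dependent constant the ADD saves nothing, hence
  // the new cutoff.
  uint64_t use(uint64_t X) {
    const uint64_t Base = 0x1234567812345678ULL; // hoisted base constant
    uint64_t A = X ^ Base;                       // direct use of Base
    uint64_t B = X & (Base + 0x10);              // rebased: Base + 16
    return A + B;
  }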
Differential Revision: https://reviews.llvm.org/D52243
llvm-svn: 343053
Summary:
Lowers (s|u)itofp and fpto(s|u)i instructions for vectors. The fp to
int conversions produce poison values if their arguments are out of
the convertible range, so a future CL will have to add an LLVM
intrinsic to make the saturating behavior of this conversion usable.
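For context, a generic scalar illustration of why saturation must be made
explicit (this is a plain clamp-first sketch, not the planned intrinsic):
  #include <algorithm>
  #include <cmath>
  #include <cstdint>

  // Out-of-range fptosi is poison in LLVM IR (and UB in C++), so clamp into
  // range first; NaN maps to 0, matching wasm's saturating conversions.
  int32_t saturatingFPToSI(float F) {
    if (std::isnan(F))
      return 0;
    const float Lo = -2147483648.0f; // INT32_MIN, exactly representable
    const float Hi = 2147483520.0f;  // largest float <= INT32_MAX
    return static_cast<int32_t>(std::clamp(F, Lo, Hi));
  }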
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D52372
llvm-svn: 343052
This removes an int->fp bitcast between the surrounding code and the movmsk. I had already added a hack to combineMOVMSK to try to look through this bitcast to improve the SimplifyDemandedBits there.
But I found an additional issue where the bitcast was preventing combineMOVMSK from being called again after earlier nodes in the DAG are optimized. The bitcast gets revisited, but not the user of the bitcast. By using integer types throughout, the bitcast doesn't get in the way.
llvm-svn: 343046
These IR patterns represent the exact behavior of a movmsk instruction using (zext (bitcast (icmp slt X, 0))).
For the v4i32/v8i32/v2i64/v4i64 cases we currently emit a PCMPGT for the icmp slt, which is unnecessary since we only care about the sign bit of the result. This is because of the int->fp bitcast we put on the input to the movmsk nodes for these cases. I'll be fixing this in a future patch.
llvm-svn: 343045
switch RTDyldObjectLinkingLayer2 to use it.
RuntimeDyld::loadObject is currently a blocking operation. This means that any
JIT'd code whose call-graph contains an embedded complete graph on K nodes will
require at least K threads to link, which precludes the use of a fixed-size
thread pool for concurrent JITing of arbitrary code (whatever K the thread pool
is set at, any code containing a complete subgraph on K+1 nodes will deadlock
at JIT-link time).
To address this issue, this commit introduces a function called jitLinkForORC
that uses continuation-passing style to pass the fix-up and finalization steps
to the asynchronous symbol resolver interface so that linking can be performed
without blocking.
llvm-svn: 343043
If the fsub in this pattern were replaced by an actual fneg
instruction, we would need to add a fold to recognize that,
because fneg would not be a binop.
llvm-svn: 343041
Summary:
We generate s_xor to lower add of i1s in general cases, and s_not to
lower add with a one-bit imm of -1 (true).
Reviewers: rampitec
Differential Revision: https://reviews.llvm.org/D52518
llvm-svn: 343030
Summary:
We are overly conservative in the loop vectorizer with respect to stores to
loop-invariant addresses.
More details in https://bugs.llvm.org/show_bug.cgi?id=38546
This is the first part of the fix, where we start with vectorizing
loop-invariant values stored to loop-invariant addresses.
This also includes changes to ORE for stores to invariant addresses.
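A hypothetical example of the newly handled pattern (assuming A and Sum do
not alias):
  // The store to *Sum writes a loop-invariant value to a loop-invariant
  // address; the loop can now be vectorized with the store sunk out of it.
  void f(int *A, int *Sum, int K, int N) {
    for (int I = 0; I < N; ++I) {
      A[I] += 1; // vectorizable work
      *Sum = K;  // invariant value stored to an invariant address
    }
  }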
Reviewers: anemet, Ayal, mkuper, mssimpso
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D50665
llvm-svn: 343028
The Lexer doesn't use this state itself. It is only set and used by AsmParser, so it seems like it should just be part of AsmParser.
Differential Revision: https://reviews.llvm.org/D52515
llvm-svn: 343027
Summary:
In D49565/r337503, the type id record writing was fixed so that only
referenced type ids were emitted into each per-module index for ThinLTO
distributed builds. However, this still left an efficiency issue: each
per-module index checked all type ids for membership in the referenced
set, yielding O(M*N) performance (M indexes and N type ids).
Change the TypeIdMap in the summary to be indexed by GUID, to facilitate
correlating with type identifier GUIDs referenced in the function
summary TypeIdInfo structures. This allowed simplifying other
places where a map from type id GUID to type id map entry was previously
being used to aid this correlation.
Also fix AsmWriter code to handle the rare case of type id GUID
collision.
For a large internal application, this reduced the thin link time by
almost 15%.
Reviewers: pcc, vitalybuka
Subscribers: mehdi_amini, inglorion, steven_wu, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D51330
llvm-svn: 343021
Summary:
We have `llvm::addLandingPadInfo` and `MachineFunction::addLandingPad`,
both of which add landing pad information to populate `LandingPadInfo`
but are called from different locations, which was confusing. This patch
unifies them into a single `MachineFunction::addLandingPad` function, which
now has the functionality of both.
Reviewers: rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D52428
llvm-svn: 343018
a header in Support.
MSVC's std::future implementation requires types to be default constructible,
but Error and Expected are not. This issue came up once before in ORC's
RPCUtils.h header and was worked around there but came up again in r342939, so
I am moving the workaround to Support to make it available to other clients.
llvm-svn: 343011
This is the final (I hope!) problem pattern mentioned in PR37749:
https://bugs.llvm.org/show_bug.cgi?id=37749
We are trying to avoid an AVX1 sinkhole caused by having 256-bit bitwise logic ops but no other 256-bit integer ops.
We've already solved the simple logic ops, but 'andn' is an x86 special. I looked at alternative solutions like
extending the generic DAG combine or trying to wait until the ANDNP node is created, but those are bigger patches
that can over-reach. I.e., splitting to 128-bit does not look like a win in most cases with >1 256-bit op.
The pattern matching is cluttered with bitcasts because of our i64 element canonicalization. For the affected test,
we have this vector-type-legalized sequence:
t29: v8i32 = concat_vectors t27, t28
t30: v4i64 = bitcast t29
t18: v8i32 = BUILD_VECTOR Constant:i32<-1>, Constant:i32<-1>, ...
t31: v4i64 = bitcast t18
t32: v4i64 = xor t30, t31
t9: v8i32 = BUILD_VECTOR Constant:i32<255>, Constant:i32<255>, ...
t34: v4i64 = bitcast t9
t35: v4i64 = and t32, t34
t36: v8i32 = bitcast t35
t37: v4i32 = extract_subvector t36, Constant:i64<0>
t38: v4i32 = extract_subvector t36, Constant:i64<4>
Differential Revision: https://reviews.llvm.org/D52318
llvm-svn: 343008
For the AMDGPU target, if an MBB contains an exec mask restore preamble,
SplitEditor may get into a state where it cannot insert a spill instruction.
E.g., for the MIR
bb.100:
%1 = S_OR_SAVEEXEC_B64 %2, implicit-def $exec, implicit-def $scc, implicit $exec
and if the register allocator tries to allocate a virtreg to the physreg already assigned to virtreg %1, it has to insert a spill instruction before the S_OR_SAVEEXEC_B64 instruction.
But that is not possible, since doing so can generate incorrect code in terms of the exec mask.
The change makes the register allocator ignore such physreg candidates.
Reviewed By: rampitec
Differential Revision: https://reviews.llvm.org/D52052
llvm-svn: 343004
If an instance (InsertionPoint or IP) of Base constant A has only one or a few
rebased constants depending on it, do NOT rebase. One extra ADD instruction is
required to materialize each rebased constant, assuming A and the rebased
constants have the same materialization cost.
Differential Revision: https://reviews.llvm.org/D52243
llvm-svn: 342994
The motivating case from:
https://bugs.llvm.org/show_bug.cgi?id=33026
...has no shuffles now. This kind of pattern may occur during
vectorization when targets have lumpy ISAs like SSE/AVX.
llvm-svn: 342988
Share predecessor search bookkeeping between performPostLD1Combine
and performNEONPostLDSTCombine. This should yield approximately 4x and
2x performance improvements, respectively.
llvm-svn: 342986