mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 11:02:59 +02:00
Commit Graph

35672 Commits

Author SHA1 Message Date
Volkan Keles
cc6119f1d3 [GlobalISel] LegalizationArtifactCombiner: Combine aext([asz]ext x) -> [asz]ext x
Summary:
Replace `aext([asz]ext x)` with `aext/sext/zext x` in order to
reduce the number of instructions generated to clean up some
legalization artifacts.

Reviewers: aditya_nandakumar, dsanders, aemerson, bogner

Reviewed By: aemerson

Subscribers: rovka, kristof.beyls, javed.absar, llvm-commits

Differential Revision: https://reviews.llvm.org/D54174

llvm-svn: 347893
2018-11-29 18:19:24 +00:00
Serge Guelton
060a664bf5 Avoid redundant reference to isPodLike in SmallVector/Optional implementation
NFC, preparatory work for isPodLike cleaning.

Differential Revision: https://reviews.llvm.org/D55005

llvm-svn: 347890
2018-11-29 17:21:54 +00:00
Teresa Johnson
525ee88977 [ThinLTO] Import local variables from the same module as caller
Summary:
We can sometimes end up with multiple copies of a local variable that
have the same GUID in the index. This happens when local variables with the
same name are defined in different source files that had the same name/path
at compile time (but were compiled into different bitcode objects).

In this case make sure we import the copy in the caller's module.
This enables importing both of the variables having the same GUID
(but which will have different promoted names since the module paths,
and therefore the module hashes, will be distinct).

Importing the wrong copy is particularly problematic for read only
variables, since we must import them as a local copy whenever
referenced. Otherwise we get undefs at link time.

Note that the llvm-lto.cpp and ThinLTOCodeGenerator changes are needed
for testing the distributed index case via clang, which will be sent as
a separate clang-side patch shortly. We were previously not doing the
dead code/read only computation before computing imports when testing
distributed index generation (as it was when testing importing and
other ThinLTO mechanisms alone).

Reviewers: evgeny777

Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith, dang, llvm-commits

Differential Revision: https://reviews.llvm.org/D55047

llvm-svn: 347886
2018-11-29 17:02:42 +00:00
Hans Wennborg
31b5fc8481 Revert r347823 "[TextAPI] Switch back to a custom Platform enum."
It broke the Windows buildbots, e.g.
http://lab.llvm.org:8011/builders/llvm-clang-lld-x86_64-scei-ps4-windows10pro-fast/builds/21829/steps/test/logs/stdio

This also reverts the follow-ups: r347824, r347827, and r347836.

llvm-svn: 347874
2018-11-29 15:47:24 +00:00
David Stuttard
c65201ef09 Add support for TFE/LWE in image intrinsics
TFE and LWE support requires extra result registers that are written in the
event of a failure in order to detect that failure case.
The specific use-case that initiated these changes is sparse texture support.

This means that if image intrinsics are used with either option turned on, the
programmer must ensure that the return type can contain all of the expected
results. This can result in redundant registers since the vector size must be a
power-of-2.

This change takes roughly 6 parts:
1. Modify the instruction defs in tablegen to add new instruction variants that
can accommodate the extra return values.
2. Updates to lowerImage in SIISelLowering.cpp to accommodate setting TFE or LWE
(where the bulk of the work for these instruction types is now done)
3. Extra verification code to catch cases where intrinsics have been used but
insufficient return registers are used.
4. Modification to the adjustWritemask optimisation to account for TFE/LWE being
enabled (requires extra registers to be maintained for error return value).
5. An extra pass to zero initialize the error value return - this is because if
the error does not occur, the register is not written and thus must be zeroed
before use. Also added a new (on by default) option to ensure ALL return values
are zero-initialized, which is required for sparse texture support.
6. Disable the inst_combine optimization in the presence of tfe/lwe (a later TODO
is to re-enable this and handle it correctly).

There's an additional fix now to avoid a dmask=0

For an image intrinsic with tfe where all result channels except tfe
were unused, I was getting an image instruction with dmask=0 and only a
single vgpr result for tfe. That is incorrect because the hardware
assumes there is at least one vgpr result, plus the one for tfe.

Fixed by forcing dmask to 1, which gives the desired two vgpr result
with tfe in the second one.

The TFE or LWE result is returned from the intrinsics using an aggregate
type. Look in the test code provided to see how this works, but in essence IR
code to invoke the intrinsic looks as follows:

%v = call {<4 x float>,i32} @llvm.amdgcn.image.load.1d.v4f32i32.i32(i32 15,
                                      i32 %s, <8 x i32> %rsrc, i32 1, i32 0)
%v.vec = extractvalue {<4 x float>, i32} %v, 0
%v.err = extractvalue {<4 x float>, i32} %v, 1

Differential revision: https://reviews.llvm.org/D48826

Change-Id: If222bc03642e76cf98059a6bef5d5bffeda38dda
llvm-svn: 347871
2018-11-29 15:21:13 +00:00
Petr Pavlu
a73160fd34 [GlobalISel] Make EnableGlobalISel always set when GISel is enabled
Change meaning of TargetOptions::EnableGlobalISel. The flag was
previously set only when a target switched on GlobalISel but it is now
always set when the GlobalISel pipeline is enabled. This makes the flag
consistent with TargetOptions::EnableFastISel and allows its use in
other parts of the compiler to determine when GlobalISel is enabled.
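
For illustration only (not part of this patch), a minimal sketch of querying
the flag after this change; the field name comes from TargetOptions, the
helper itself is hypothetical:

#include "llvm/Target/TargetOptions.h"

// After this change, EnableGlobalISel means "the GlobalISel pipeline is
// enabled", regardless of whether the target switched it on by default.
static bool isGISelEnabled(const llvm::TargetOptions &Opts) {
  return Opts.EnableGlobalISel;
}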

The EnableGlobalISel flag had previouly only one use in
TargetPassConfig::isGlobalISelAbortEnabled(). The method used its value
to determine if GlobalISel was enabled by a target and returned false in
such a case. To preserve the current behaviour, a new flag
TargetOptions::GlobalISelAbort is introduced to separately record the
abort behaviour.

Differential Revision: https://reviews.llvm.org/D54518

llvm-svn: 347861
2018-11-29 12:56:32 +00:00
Andrea Di Biagio
a900121de4 [llvm-mca][MC] Add the ability to declare which processor resources model load/store queues (PR36666).
This patch adds the ability to specify via tablegen which processor resources
are load/store queue resources.

A new tablegen class named MemoryQueue can be optionally used to mark resources
that model load/store queues.  Information about the load/store queue is
collected at 'CodeGenSchedule' stage, and analyzed by the 'SubtargetEmitter' to
initialize two new fields in struct MCExtraProcessorInfo named `LoadQueueID` and
`StoreQueueID`.  Those two fields are identifiers for buffered resources used to
describe the load queue and the store queue.
Field `BufferSize` is interpreted as the number of entries in the queue, while
the number of units is a throughput indicator (i.e. number of available pickers
for loads/stores).

At construction time, LSUnit in llvm-mca checks for the presence of extra
processor information (i.e. MCExtraProcessorInfo) in the scheduling model.  If
that information is available, and fields LoadQueueID and StoreQueueID are set
to a value different than zero (i.e. the invalid processor resource index), then
LSUnit initializes its LoadQueue/StoreQueue based on the BufferSize value
declared by the two processor resources.

With this patch, we more accurately track dynamic dispatch stalls caused by the
lack of LS tokens (i.e. load/store queue full). This is also shown by the
differences in two BdVer2 tests. Stalls that were previously classified as
generic SCHEDULER FULL stalls are now correctly classified as either "load
queue full" or "store queue full".

About the differences in the -scheduler-stats view: those differences are
expected, because entries in the load/store queue are not released at
instruction issue stage. Instead, those are released at instruction executed
stage.  This is the main reason why, for the modified tests, the load/store
queues get full before PdEx is full.

Differential Revision: https://reviews.llvm.org/D54957

llvm-svn: 347857
2018-11-29 12:15:56 +00:00
Juergen Ributzka
ab5d0f9da8 [TextAPI] Switch back to a custom Platform enum.
Moving to PlatformType from BinaryFormat had some UB fallout when handing
unknown platforms or malformed input files.

This should fix the sanitizer bots.

llvm-svn: 347836
2018-11-29 05:56:03 +00:00
Kristina Brooks
284bf85c95 Add Hurd target to LLVMSupport (1/2)
Add the required target triples to LLVMSupport to support Hurd
in LLVM (formally `pc-hurd-gnu`).

Patch by sthibaul (Samuel Thibault)

Differential Revision: https://reviews.llvm.org/D54378

llvm-svn: 347832
2018-11-29 03:23:01 +00:00
Juergen Ributzka
b9dbc3bb62 [TextAPI] TBD Reader/Writer (bot fixes)
Trying to see if switching from a vector to an array will appease the bots.

llvm-svn: 347824
2018-11-29 01:55:57 +00:00
Juergen Ributzka
ab3e0147c2 [TextAPI] TBD Reader/Writer
Add basic infrastructure for reading and writing TBD files (versions 1 - 3).

The TextAPI library is not used by anything yet (besides the unit tests). Tool
support will be added in a separate commit.

The TBD format is currently documented in the implementation file (TextStub.cpp).

https://reviews.llvm.org/D53945

Update: This contains changes to fix issues discovered by the bots:
 - add parentheses to silence warnings.
 - rename variables
 - use PlatformType from BinaryFormat
llvm-svn: 347823
2018-11-29 01:20:46 +00:00
Juergen Ributzka
00497177ea Revert "[TextAPI] TBD Reader/Writer"
Reverting to unbreak bots.

llvm-svn: 347809
2018-11-28 21:38:28 +00:00
Juergen Ributzka
0261cead1a [TextAPI] TBD Reader/Writer
Add basic infrastructure for reading and writing TBD files (versions 1 - 3).

The TextAPI library is not used by anything yet (besides the unit tests). Tool
support will be added in a separate commit.

The TBD format is currently documented in the implementation file (TextStub.cpp).

https://reviews.llvm.org/D53945

llvm-svn: 347808
2018-11-28 21:27:00 +00:00
Paul Robinson
f2c09f1782 [DebugInfo] IR/Bitcode changes for DISubprogram flags.
Packing the flags into one bitcode word will save effort in
adding new flags in the future.

Differential Revision: https://reviews.llvm.org/D54755

llvm-svn: 347806
2018-11-28 21:14:32 +00:00
David Spickett
5541d58ceb Fix build of r347741 by adding missing vector
include to ARMTargetParser.h.

llvm-svn: 347748
2018-11-28 12:05:36 +00:00
Francis Visoiu Mistrih
6683b9c236 [CodeGen][NFC] Make TII::getMemOpBaseImmOfs return a base operand
Currently, instructions doing memory accesses through a base operand that is
not a register can not be analyzed using `TII::getMemOpBaseRegImmOfs`.

This means that functions such as `TII::shouldClusterMemOps` will bail
out on instructions using an FI as a base instead of a register.

The goal of this patch is to refactor all this to return a base
operand instead of a base register.

Then in a separate patch, I will add FI support to the mem op clustering
in the MachineScheduler.

Differential Revision: https://reviews.llvm.org/D54846

llvm-svn: 347746
2018-11-28 12:00:20 +00:00
Simon Atanasyan
200e7ee47a [DebugInfo] Rename EmitDebugThreadLocal back to EmitDebugValue. NFC
This reverts r294500. DwarfCompileUnit::addAddressExpr uses DIEExpr
for PCOffset. In that case the expression is unrelated to thread locals
and so emitting a value of the DIEExpr does not have to always mean
emit-debug-thread-local.

llvm-svn: 347744
2018-11-28 11:48:07 +00:00
David Spickett
c8642ec8b2 [ARM, AArch64] Move ARM/AArch64 target parsers into
separate files to enable future changes.
    
This moves ARM and AArch64 target parsing into their
own files. They are still accessible through 
TargetParser.h as before.

Several functions in AArch64 which were just forwarders to ARM
have been removed. All except AArch64::getFPUName were unused,
and that was only used in a test. Which itself was overlapping
one in ARM, so it has also been removed.

Differential revision: https://reviews.llvm.org/D53980

llvm-svn: 347741
2018-11-28 11:38:10 +00:00
Reid Kleckner
e8ca9ef5bc [PDB] Add symbol records in bulk
Summary:
This speeds up linking clang.exe/pdb with /DEBUG:GHASH by 31%, from
12.9s to 9.8s.

Symbol records are typically small (16.7 bytes on average), but we
processed them one at a time. CVSymbol is a relatively "large" type. It
wraps an ArrayRef<uint8_t> with a kind and an optional 32-bit hash, which we
don't need. Before this change, each DbiModuleDescriptorBuilder would
maintain an array of CVSymbols, and would write them individually with a
BinaryItemStream.

With this change, we now add symbols that happen to appear contiguously
in bulk. For each .debug$S section (roughly one per function), we
allocate two copies, one for relocation, and one for realignment
purposes. For runs of symbols that go in the module stream, which is
most symbols, we now add them as a single ArrayRef<uint8_t>, so the
vector in DbiModuleDescriptorBuilder is roughly linear in the number of
.debug$S sections (O(# funcs)) instead of the number of symbol records
(very large).

Some stats on symbol sizes for the curious:
  PDB size: 507M
  sym bytes: 316,508,016
  sym count:  18,954,971
  sym byte avg: 16.7

As future work, we may be able to skip copying symbol records in the
linker for realignment purposes if we make LLVM write them aligned into
the object file. We need to double check that such symbol records are
still compatible with link.exe, but if so, it's definitely worth doing,
since my profile shows we spend 500ms in memcpy in the symbol merging
code. We could potentially cut that in half by saving a copy.
Alternatively, we could apply the relocations *after* we iterate the
symbols. This would require some careful re-engineering of the
relocation processing code, though.

Reviewers: zturner, aganea, ruiu

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D54554

llvm-svn: 347687
2018-11-27 19:00:23 +00:00
Craig Topper
f88d85bd02 [X86] Add cascade lake arch in X86 target.
This is skylake-avx512 with the addition of avx512vnni ISA.

Patch by Jianping Chen

Differential Revision: https://reviews.llvm.org/D54785

llvm-svn: 347681
2018-11-27 18:05:00 +00:00
James Dennett
3ce09e1b66 Documentation: add \file markup as needed.
This makes Doxygen correctly associate the doc comment with the current
file rather than adding to the documentation for namespace llvm.

llvm-svn: 347679
2018-11-27 17:53:03 +00:00
Pavel Labath
9101881e58 [Demangle] remove itaniumFindTypesInMangledName
Summary:
This (very specialized) function was added to enable an LLDB use case.
Now that a more generic interface (overriding of parser functions -
D52992)  is available, and LLDB has been converted to use that (D54074),
the function is unused and can be removed.

Reviewers: erik.pilkington, sgraenitz, rsmith

Subscribers: mgorny, hiraditya, christof, libcxx-commits, llvm-commits

Differential Revision: https://reviews.llvm.org/D54893

llvm-svn: 347670
2018-11-27 16:11:24 +00:00
Vitaly Buka
3ac8f18932 [stack-safety] Empty local passes for Stack Safety Global Analysis
Reviewers: eugenis, vlad.tsyrklevich

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D54541

llvm-svn: 347610
2018-11-26 23:05:48 +00:00
Vitaly Buka
14e2ad1d95 [stack-safety] Local analysis implementation
Summary:
Analysis produces StackSafetyInfo which contains information with how allocas
and parameters were used in functions.

From prototype by Evgenii Stepanov and  Vlad Tsyrklevich.

Reviewers: eugenis, vlad.tsyrklevich, pcc, glider

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D54504

llvm-svn: 347603
2018-11-26 21:57:59 +00:00
Vitaly Buka
1d2c594b1b [stack-safety] Empty local passes for Stack Safety Local Analysis
Reviewers: eugenis, vlad.tsyrklevich

Subscribers: mgorny, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D54502

llvm-svn: 347602
2018-11-26 21:57:47 +00:00
Teresa Johnson
388284330d [ThinLTO] Consolidate cache key computation between new/old LTO APIs
Summary:
The old legacy LTO API had a separate cache key computation, which was
a subset of the cache key computation in the new LTO API (from what I
can tell this is largely just because certain features such as CFI,
dsoLocal, etc are only utilized via the new LTO API). However, having
separate computations is unnecessary (much of the code is duplicated),
and can lead to bugs when adding new optimizations if both cache
computation algorithms aren't updated properly - it's much easier to
maintain if we have a single facility.

This patch refactors the old LTO API code to use the cache key
computation from the new LTO API. To do this, we set up an lto::Config
object and fill in the fields that the old LTO was hashing (the others
will just use the defaults).

There are two notable changes:
- I added a Freestanding flag to the LTO Config. Currently this is only
used by the legacy LTO API. In the patch that added it (D30791) I had
asked about adding it to the new LTO API, but it looks like that was not
addressed. This should probably be discussed as a follow up to this
change, as it is orthogonal.
- The legacy LTO API had some code that was hashing the GUID of all
preserved symbols defined in the module. I looked back at the history of
this (which was added with the original hashing in the legacy LTO API in
D18494), and there is a comment in the review thread that it was added
in preparation for future internalization. We now do the internalization
of course, and that is handled in the new LTO API cache key computation
by hashing the recorded linkage type of all defined globals. Therefore I
didn't try to move over and keep the preserved symbols handling.

Reviewers: steven_wu, pcc

Subscribers: mehdi_amini, inglorion, eraman, dexonsmith, dang, llvm-commits

Differential Revision: https://reviews.llvm.org/D54635

llvm-svn: 347592
2018-11-26 20:40:37 +00:00
Than McIntosh
19bdd4a2bb [CodeGen] Support custom format of stack maps
Summary:
Add a hook to the GCMetadataPrinter for emitting stack maps in
custom format. The hook will be called at stack map generation
time. The default stack map format is used if there is no hook.

For this to be useful a few data structures and accessors are
exposed from the StackMaps class, so the custom printer can
access the stack map data.
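
As a rough sketch of the extension point being described (the hook name and
signature below are assumptions for illustration, not lifted from the patch):

#include "llvm/CodeGen/GCMetadataPrinter.h"
#include "llvm/CodeGen/StackMaps.h"

// Hypothetical subclass: return true to indicate the stack maps were emitted
// in a custom format, so the default emission is skipped.
class MyGCPrinter : public llvm::GCMetadataPrinter {
public:
  bool emitStackMaps(llvm::StackMaps &SM, llvm::AsmPrinter &AP) override {
    // ... walk SM's records and emit them into a custom section ...
    return true;
  }
};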

This patch authored by Cherry Zhang <cherryyz@google.com>.

Reviewers: thanm, apilipenko, reames

Reviewed By: reames

Subscribers: reames, apilipenko, nemanjai, javed.absar, kbarton, jsji, llvm-commits

Differential Revision: https://reviews.llvm.org/D53892

llvm-svn: 347584
2018-11-26 18:43:48 +00:00
Fedor Sergeev
79ff5dcabd Revert "[TTI] Reduction costs only need to include a single extract element cost"
This reverts commit r346970.
It was causing PR39774, a crash in slp-vectorizer on a rather simple loop
with just a bunch of 'and's in the body.

llvm-svn: 347541
2018-11-26 10:17:27 +00:00
Argyrios Kyrtzidis
27a03b19f4 [Support/FileSystem] Add sub-second precision for atime/mtime of sys::fs::file_status on unix platforms
Summary:
getLastAccessedTime() and getLastModificationTime() provided times in nanoseconds but with only 1 second resolution, even when the underlying file system could provide more precise times than that.
These changes add sub-second precision for unix platforms that support improved precision.

Also add some comments to make sure people are aware that the resolution of times can vary across different file systems.
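
A small usage sketch (not from the patch) of reading these timestamps; on
unix file systems with the improved support, the returned TimePoint now
carries sub-second precision. The helper name is hypothetical:

#include "llvm/ADT/Twine.h"
#include "llvm/Support/Chrono.h"
#include "llvm/Support/FileSystem.h"

llvm::sys::TimePoint<> getMTime(const llvm::Twine &Path) {
  llvm::sys::fs::file_status Status;
  if (llvm::sys::fs::status(Path, Status))   // non-zero error_code => failure
    return llvm::sys::TimePoint<>();         // fall back to the epoch
  return Status.getLastModificationTime();   // sub-second where supported
}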

Reviewers: labath, zturner, aaron.ballman, kristina

Reviewed By: aaron.ballman, kristina

Subscribers: lebedev.ri, mgorny, kristina, llvm-commits

Differential Revision: https://reviews.llvm.org/D54826

llvm-svn: 347530
2018-11-26 00:03:39 +00:00
Sanjay Patel
dbb1da3cf0 [x86] limit transform for select-of-fp-constants
This should likely be adjusted to limit this transform
further, but these diffs should be clear wins.

If we have blendv/conditional move, then we should assume 
those are cheap ops. The loads become independent of the
compare, so those can be speculated before we need to use 
the values in the blend/mov.

llvm-svn: 347526
2018-11-25 17:27:02 +00:00
Sanjay Patel
af6354ebf9 [SelectionDAG] move constant or splat functions to common location
rL347502 moved the null sibling, so we should group all of these
together. I'm not sure why these aren't methods of the SDValue
class itself, but that's another patch if that's possible.

llvm-svn: 347523
2018-11-25 16:09:32 +00:00
Joel Jones
78f772fee0 Revert unapproved commit
llvm-svn: 347511
2018-11-24 07:26:55 +00:00
Joel Jones
74a51d699d [AArch64] Enable libm vectorized functions via SLEEF
This changeset is modeled after Intel's submission for SVML. It enables
vectorization of trigonometry functions via SLEEF: http://sleef.org/.

 * A new vectorization library enum is added to TargetLibraryInfo.h: SLEEF.
 * A new option is added to TargetLibraryInfoImpl - ClVectorLibrary: SLEEF.
 * A comprehensive test case is included in this changeset.
 * In a separate changeset (for clang), a new vectorization library argument is
   added to -fveclib: -fveclib=SLEEF.

Trigonometry functions that are vectorized by SLEEF:

acos
asin
atan
atanh
cos
cosh
exp
exp2
exp10
lgamma
log10
log2
log
sin
sinh
sqrt
tan
tanh
tgamma

Patch by Stefan Teleman
Differential Revision: https://reviews.llvm.org/D53927

llvm-svn: 347510
2018-11-24 06:41:39 +00:00
Evandro Menezes
c46bcdb565 [TableGen] Emit more variant transitions
`llvm-mca` relies on the predicates to be based on `MCSchedPredicate` in order
to resolve the scheduling for variant instructions.  Otherwise, it aborts
the building of the instruction model early.

However, the scheduling model emitter in `TableGen` gives up too soon, unless
all processors use only such predicates.

In order to allow more processors to be used with `llvm-mca`, this patch
emits scheduling transitions if any processor uses these predicates.  The
transition emitted for the processors using legacy predicates is the one
specified with `NoSchedPred`, which is based on `MCSchedPredicate`.

Preferably, `llvm-mca` should instead assume a reasonable default when a
variant transition is not based on `MCSchedPredicate` for a given processor.
This issue should be revisited in the future.

Differential revision: https://reviews.llvm.org/D54648

llvm-svn: 347504
2018-11-23 21:17:33 +00:00
Sanjay Patel
99ff4d9983 [DAG] consolidate shift simplifications
...and use them to avoid creating obviously undef values as
discussed in the post-commit thread for r347478.

The diffs in vector div/rem show that we were missing real
optimizations by creating bogus shift nodes.

llvm-svn: 347502
2018-11-23 20:05:12 +00:00
Luke Cheeseman
610670577d Revert r347490 as it breaks address sanitizer builds
llvm-svn: 347499
2018-11-23 17:13:06 +00:00
Luke Cheeseman
30437c2af7 Revert r343341
- Cannot reproduce the build failure locally and the build logs have
  been deleted.

llvm-svn: 347490
2018-11-23 11:01:47 +00:00
Fangrui Song
5985e4d0db [Object] Also treat STB_GNU_UNIQUE symbols as exported to other DSO
All of STB_GLOBAL/STB_WEAK/STB_GNU_UNIQUE are treated as export symbols, see:

glibc/elf/dl-lookup.c:do_lookup_x
musl/ldso/dynlink.c OK_BINDS

Though ld.so does not read the binding, the currently used STV_DEFAULT or STV_PROTECTED is a good emulation of linker behavior.
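
For illustration (a sketch, not the actual patch), the binding check being
described, using the constants from llvm/BinaryFormat/ELF.h; the helper
function is hypothetical:

#include "llvm/BinaryFormat/ELF.h"

// Treat GNU_UNIQUE like GLOBAL/WEAK when deciding whether a symbol is
// exported to (visible from) other DSOs.
static bool isExportedBinding(unsigned char Binding) {
  using namespace llvm::ELF;
  return Binding == STB_GLOBAL || Binding == STB_WEAK ||
         Binding == STB_GNU_UNIQUE;
}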

llvm-svn: 347481
2018-11-23 01:33:19 +00:00
Chandler Carruth
4933a85499 [TI removal] Leverage the fact that TerminatorInst is gone to create
a normal base class that provides all common "call" functionality.

This merges two complex CRTP mixins for the common "call" logic and
common operand bundle logic into a single, normal base class of
`CallInst` and `InvokeInst`. Going forward, users can typically
`dyn_cast<CallBase>` and use the resulting API. No more need for the
`CallSite` wrapper. I'm planning to migrate current usage of the wrapper
to directly use the base class and then it can be removed, but those are
simpler and much more incremental steps. The big change is to introduce
this abstraction into the type system.
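
As a rough usage sketch of the pattern this enables (illustrative only; the
argument-count accessor name is an assumption):

#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Support/Casting.h"

// Works for both CallInst and InvokeInst without going through CallSite.
static unsigned numCallArgs(llvm::Instruction &I) {
  if (auto *CB = llvm::dyn_cast<llvm::CallBase>(&I))
    return CB->arg_size();   // assumed name of the argument-count accessor
  return 0;
}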

I've tried to do some basic simplifications of the APIs that I couldn't
really help but touch as part of this:
- I've tried to organize the attribute API and bundle API into groups to
  make understanding the API of `CallBase` easier. Without this,
  I wasn't able to navigate the API sanely for all of the ways I needed
  to modify it.
- I've added what seem like more clear and consistent APIs for getting
  at the called operand. These ended up being especially useful to
  consolidate the *numerous* duplicated code paths trying to do this.
- I've largely reworked the organization and implementation of the APIs
  for computing the argument operands as they needed to change to work
  with the new subclass approach.

To minimize any cost associated with this abstraction, I've moved the
operand layout in memory to store the called operand last. This makes
its position relative to the end of the operand array the same,
regardless of the subclass. It should make it much cheaper to reference
from the `CallBase` abstraction, and this is likely one of the most
frequent things to query.

We do still pay one abstraction penalty here: we have to branch to
determine whether there are 0 or 2 extra operands when computing the end
of the argument operand sequence. However, that seems both rare and
should optimize well. I've implemented this in a way specifically
designed to allow it to optimize fairly well. If this shows up in
profiles, we can add overrides of the relevant methods to the subclasses
that bypass this penalty. It seems very unlikely that this will be an
issue as the code was *already* dealing with an ever present abstraction
of whether or not there are operand bundles, so this isn't the first
branch to go into the computation.

I've tried to remove as much of the obvious vestigial API surface of the
old CRTP implementation as I could, but I suspect there is further
cleanup that should now be possible, especially around the operand
bundle APIs. I'm leaving all of that for future work in this patch as
enough things are changing here as-is.

One thing that made this harder for me to reason about and debug was the
pervasive use of unsigned values in subtraction and other arithmetic
computations. I had to debug more than one unintentional wrap. I've
switched a few of these to use `int` which seems substantially simpler,
but I've held back from doing this more broadly to avoid creating
confusing divergence within a single class's API.

I also worked to remove all of the magic numbers used to index into
operands, putting them behind named constants or putting them into
a single method with a comment and strictly using the method elsewhere.
This was necessary to be able to re-layout the operands as discussed
above.

Thanks to Ben for reviewing this (somewhat large and awkward) patch!

Differential Revision: https://reviews.llvm.org/D54788

llvm-svn: 347452
2018-11-22 10:31:35 +00:00
Eric Fiselier
155ac280ff [LLVM] Allow modulemap installation
Summary:
Currently we can't install the modulemaps provided by LLVM, since they are not structured to support headers generated as part of the build (ex. `llvm/IR/Attributes.gen`).
This patch restructures the module maps in order to support installation.

Modules containing generated headers are defined in the new `module.extern.modulemap` file, and are referenced from the main `module.modulemap` using `extern module`. There are two versions of the `module.extern.modulemap` file; one used when building and another, `module.install.modulemap`, which is re-named during installation.

Users can opt-into module map installation using `-DLLVM_INSTALL_MODULEMAPS=ON`.  The default value is `OFF` due to llvm.org/PR31905.

Reviewers: rsmith, mehdi_amini, bruno, EricWF

Reviewed By: EricWF

Subscribers: tschuett, chapuni, mgorny, llvm-commits

Differential Revision: https://reviews.llvm.org/D53510

llvm-svn: 347420
2018-11-21 20:46:50 +00:00
Vladimir Stefanovic
d431e5c490 [MC] Support labels as offsets in .reloc directive
Currently, expressions like

  .reloc 1f, R_MIPS_JALR, foo
  1: nop

are not allowed, i.e. an offset in .reloc can only be an absolute value.
This patch adds support for labels as offsets.
If the offset is a forward-declared label, MCObjectStreamer keeps the fixup locally
and adds it to the fixups vector after the label (and its offset) is defined.
label+number is not supported yet.

Differential revision: https://reviews.llvm.org/D53990

llvm-svn: 347397
2018-11-21 16:28:39 +00:00
Mikael Holmen
177d678846 [PM] Port Scalarizer to the new pass manager.
Patch by: markus (Markus Lavin)

Reviewers: chandlerc, fedor.sergeev

Reviewed By: fedor.sergeev

Subscribers: llvm-commits, Ka-Ka, bjope

Differential Revision: https://reviews.llvm.org/D54695

llvm-svn: 347392
2018-11-21 14:00:17 +00:00
Zachary Turner
e7be680ca1 Fix pointer options mask. It was off by 1 bit.
llvm-svn: 347359
2018-11-20 22:53:40 +00:00
Zachary Turner
ab1a02de19 [CodeView] Add support for ref-qualified member functions.
When you have a member function with a ref-qualifier, for example:

struct Foo {
  void Func() &;
  void Func2() &&;
};

clang-cl was not emitting this information. Doing so is a bit
awkward, because it's not a property of the LF_MFUNCTION type, which
is what you'd expect. Instead, it's a property of the this pointer
which is actually an LF_POINTER. This record has an attributes
bitmask on it, and our handling of this bitmask was all wrong. We
had some parts of the bitmask defined incorrectly, but importantly
for this bug, we didn't know about these extra 2 bits that represent
the ref qualifier at all.

Differential Revision: https://reviews.llvm.org/D54667

llvm-svn: 347354
2018-11-20 22:13:43 +00:00
Zachary Turner
bac1f14341 [CodeView] RelocPtr points to little endian data.
Don't use a uint32_t*, use a ulittle32_t* to make this correct
on big endian systems.
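
A minimal sketch (illustrative, not the patch itself) of the endian-safe read
pattern this refers to; the helper function is hypothetical:

#include "llvm/Support/Endian.h"
#include <cstdint>

// Reading through ulittle32_t makes the little-endian decode explicit in the
// type, so the result is the same on big-endian hosts.
static uint32_t readLE32(const uint8_t *Data) {
  const auto *P = reinterpret_cast<const llvm::support::ulittle32_t *>(Data);
  return *P;
}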

Patch by James Clarke
Differential Revision: https://reviews.llvm.org/D54421

llvm-svn: 347349
2018-11-20 21:30:11 +00:00
Sanjay Patel
18470cf8db [APInt] Add methods for saturated add and sub
This adds the sadd_sat, uadd_sat, ssub_sat, usub_sat methods for performing saturating additions and subtractions to APInt.
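
A short usage sketch (illustrative only, not from the patch): results clamp
at the type's minimum/maximum instead of wrapping.

#include "llvm/ADT/APInt.h"

void saturatingDemo() {
  llvm::APInt A(8, 250), B(8, 10);
  llvm::APInt U = A.uadd_sat(B);                                    // 255, not 4
  llvm::APInt S = llvm::APInt(8, 120).sadd_sat(llvm::APInt(8, 50)); // 127 (INT8_MAX)
  llvm::APInt D = llvm::APInt(8, 5).usub_sat(llvm::APInt(8, 9));    // 0, no wrap
  (void)U; (void)S; (void)D;
}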

Split out from D54237.

Patch by: nikic (Nikita Popov)

Differential Revision: https://reviews.llvm.org/D54332

llvm-svn: 347324
2018-11-20 16:47:59 +00:00
Sanjay Patel
b3121b5992 [PatternMatch] Handle undef vectors consistently
This patch fixes the issue noticed in D54532. 
The problem is that cst_pred_ty-based matchers like m_Zero() currently do not match 
scalar undefs (as expected), but *do* match vector undefs. This may lead to optimization 
inconsistencies in rare cases.
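
For context, a tiny illustrative sketch (not from the patch) of the matcher in
question; the wrapper function is hypothetical:

#include "llvm/IR/PatternMatch.h"

static bool isZeroConst(llvm::Value *V) {
  using namespace llvm::PatternMatch;
  return match(V, m_Zero());   // the cst_pred_ty-based matcher in question
}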

There is only one existing test for which output changes, reverting the change from D53205. 
The reason here is that vector fsub undef, %x is no longer matched as an m_FNeg(). While I 
think that the new output is technically worse than the previous one, it is consistent with 
scalar, and I don't think it's really important either way (generally that undef should have 
been folded away prior to reassociation.)

I've also added another test case for this issue based on InstructionSimplify. It took some 
effort to find that one, as in most cases undef folds are either checked first -- and in the 
cases where they aren't it usually happens to not make a difference in the end. This is the 
only case I was able to come up with. Prior to this patch the test case simplified to undef 
in the scalar case, but zeroinitializer in the vector case.

Patch by: @nikic (Nikita Popov)

Differential Revision: https://reviews.llvm.org/D54631

llvm-svn: 347318
2018-11-20 16:08:19 +00:00
Vedant Kumar
12d084996f [IR] Add hasNPredecessors, hasNPredecessorsOrMore to BasicBlock
Add methods to BasicBlock which make it easier to efficiently check
whether a block has N (or more) predecessors.

This can be more efficient than using pred_size(), which is a linear
time operation.
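
A small illustrative sketch (not part of the patch) of the intended use; the
wrapper functions are hypothetical:

#include "llvm/IR/BasicBlock.h"

// Equivalent to pred_size(&BB) == 1, but stops walking the use list as soon
// as a second predecessor is seen.
static bool hasSinglePred(const llvm::BasicBlock &BB) {
  return BB.hasNPredecessors(1);
}

static bool isMergePoint(const llvm::BasicBlock &BB) {
  return BB.hasNPredecessorsOrMore(2);   // at least two incoming edges
}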

We might consider adding similar methods for successors. I haven't done
so in this patch because succ_size() is already O(1).

With this patch applied, I measured a 0.065% compile-time reduction in
user time for running `opt -O3` on the sqlite3 amalgamation (30 trials).
The change in mergeStoreIntoSuccessor alone saves 45 million linked list
iterations in a stage2 Release build of llc.

See llvm.org/PR39702 for a harder but more general way of achieving
similar results.

Differential Revision: https://reviews.llvm.org/D54686

llvm-svn: 347256
2018-11-19 19:54:27 +00:00
Roman Lebedev
ac842bc3f1 [IR] DISubprogram::toSPFlags(): fix "enumeral and non-enumeral type in conditional expression"
/build/llvm/include/llvm/IR/DebugInfoMetadata.h: In static member function ‘static llvm::DISubprogram::DISPFlags llvm::DISubprogram::toSPFlags(bool, bool, bool, unsigned int)’:
/build/llvm/include/llvm/IR/DebugInfoMetadata.h:1636:50: warning: enumeral and non-enumeral type in conditional expression [-Wextra]
                                   (IsLocalToUnit ? SPFlagLocalToUnit : 0) |
                                    ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~
/build/llvm/include/llvm/IR/DebugInfoMetadata.h:1637:49: warning: enumeral and non-enumeral type in conditional expression [-Wextra]
                                   (IsDefinition ? SPFlagDefinition : 0) |
                                    ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
/build/llvm/include/llvm/IR/DebugInfoMetadata.h:1638:48: warning: enumeral and non-enumeral type in conditional expression [-Wextra]
                                   (IsOptimized ? SPFlagOptimized : 0));
                                    ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
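
One way to silence this kind of warning (a sketch only, assuming the
zero-valued enumerator is spelled SPFlagZero) is to keep both arms of each
conditional in the enumeration type:

#include "llvm/IR/DebugInfoMetadata.h"

// Both arms of the conditional are now of the enumeration type, so -Wextra
// no longer warns about mixing enumeral and non-enumeral operands.
static llvm::DISubprogram::DISPFlags localToUnitFlag(bool IsLocalToUnit) {
  using SP = llvm::DISubprogram;
  return IsLocalToUnit ? SP::SPFlagLocalToUnit : SP::SPFlagZero;
}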

llvm-svn: 347250
2018-11-19 19:07:03 +00:00
Simon Pilgrim
0bf12484bc Add missing closing bracket.
llvm-svn: 347247
2018-11-19 18:54:34 +00:00