mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-22 18:54:02 +01:00
Commit Graph

183247 Commits

Author SHA1 Message Date
Lang Hames
44eb111f3c [ORC] Refactor definition-generation, add a generator for static libraries.
This patch replaces the JITDylib::DefinitionGenerator typedef with a class of
the same name, and adds support for attaching a sequence of DefinitionGenerator
objects to a JITDylib.

This patch also adds a new definition generator,
StaticLibraryDefinitionGenerator, that can be used to add symbols from a static
library to a JITDylib. An object from the static library will be added (via
a supplied ObjectLayer reference) whenever a symbol from that object is
referenced.

To enable testing, lli is updated to add support for the --extra-archive option
when running in -jit-kind=orc-lazy mode.

llvm-svn: 368707
2019-08-13 16:05:18 +00:00
Matt Arsenault
3482b4fef4 GlobalISel: Add more verifier checks for G_SHUFFLE_VECTOR
llvm-svn: 368705
2019-08-13 15:52:21 +00:00
Matt Arsenault
284e8e1c63 GlobalISel: Change representation of shuffle masks
Currently shuffle masks get emitted like any other constant, and you end
up with a bunch of G_CONSTANT virtual registers feeding a
G_BUILD_VECTOR. The AArch64 selector then asserts on anything that
doesn't fit this pattern. This isn't an ideal representation; the new one
should avoid legalization and leave fewer opportunities for a
representational error.

Rather than invent a new shuffle mask operand type, similar to what
ShuffleVectorSDNode does, just track the original IR Constant mask
operand. I don't completely like the idea of adding another link to
the IR, but MIR is already quite dependent on IR constants,
and this will allow sharing the shuffle mask utility functions with
the IR.

llvm-svn: 368704
2019-08-13 15:34:38 +00:00
Roman Lebedev
07aca076e2 [CodeGen][SelectionDAG] More efficient code for X % C == 0 (SREM case)
Summary:
This implements an optimization described in Hacker's Delight 10-17:
when `C` is constant, the result of `X % C == 0` can be computed
more cheaply without actually calculating the remainder.
The motivation is discussed here: https://bugs.llvm.org/show_bug.cgi?id=35479.

One huge caveat: this signed case is only valid for positive divisors.

While we can freely negate negative divisors, we can't negate `INT_MIN`,
so for now, if `INT_MIN` is encountered, we bail out.
As a follow-up, it should be possible to handle that more gracefully
via extra `and`+`setcc`+`select`.

This passes LLVM's test-suite; from a cursory(!) cross-examination,
the folds (the assembly) match those of GCC, and manual checking via Alive
did not reveal any issues (other than the `INT_MIN` case).
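
For reference, here is a minimal, self-contained sketch of the Hacker's Delight 10-17 idea in its simplest form, an unsigned comparison against an odd divisor. It is illustrative only (the helper names are made up here) and is not the SelectionDAG code this patch adds for the signed case:
```
#include <cassert>
#include <cstdint>

// Multiplicative inverse of an odd d modulo 2^32, via Newton's iteration
// (each step doubles the number of correct low bits).
static uint32_t modInverse(uint32_t d) {
  uint32_t x = d; // correct to 3 low bits for odd d
  for (int i = 0; i < 5; ++i)
    x *= 2 - d * x;
  return x;
}

// For odd d: n % d == 0  <=>  n * inv(d) <= UINT32_MAX / d  (all mod 2^32).
// Even divisors additionally need a rotate by the number of trailing zeros.
static bool isDivisible(uint32_t n, uint32_t d) {
  return n * modInverse(d) <= UINT32_MAX / d;
}

int main() {
  for (uint32_t n = 0; n < 100000; ++n)
    assert(isDivisible(n, 7) == (n % 7 == 0));
}
```
The key point is that the remainder itself is never computed; the check becomes a multiply and an unsigned compare against a constant.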

Reviewers: RKSimon, spatel, hermord, craig.topper, xbolva00

Reviewed By: RKSimon, xbolva00

Subscribers: xbolva00, thakis, javed.absar, hiraditya, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65366

llvm-svn: 368702
2019-08-13 14:57:37 +00:00
Roman Lebedev
f5296f1970 [TargetLowering][NFC] prepareUREMEqFold(): fixup comment
The comment initially matched the code, but the code was incorrect and was
fixed after the initial revert, back when it was introduced; the comment was
never updated.

llvm-svn: 368701
2019-08-13 14:57:08 +00:00
Hubert Tong
f9207ae22b Revert r368691; test checked in without changes by accident
llvm-svn: 368699
2019-08-13 14:43:34 +00:00
Jordan Rupprecht
3abe0b3ab9 [llvm-readelf] Implement note parsing for NT_FILE and unknown descriptors
Summary:
This patch implements two note parsers: one for NT_FILE core dumps, e.g.:

```
  CORE                  0x00000080      NT_FILE (mapped files)
    Page size: 4096
                 Start                 End         Page Offset
    0x0000000000001000  0x0000000000002000  0x0000000000003000
        /path/to/a.out
    0x0000000000004000  0x0000000000005000  0x0000000000006000
        /path/to/libc.so
    0x0000000000007000  0x0000000000008000  0x0000000000009000
        [stack]
```

(A more realistic example can be tested locally by creating a crashing program and running `llvm-readelf -n core`)

The other is a raw hex dump of the descriptor data for unhandled descriptor types.
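
For context, an NT_FILE descriptor starts with a mapping count and a page size, followed by that many (start, end, file offset) triples and then the same number of NUL-terminated paths. Below is a minimal sketch of walking such a descriptor, assuming a 64-bit little-endian core file; it is illustrative only and not the llvm-readelf implementation:
```
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// One NT_FILE entry: [Start, End) is mapped from Path at FileOfs,
// where FileOfs is stored in units of the note's page size.
struct FileMapping {
  uint64_t Start, End, FileOfs;
  std::string Path;
};

std::vector<FileMapping> parseNtFile(const uint8_t *Desc, size_t Size) {
  auto read64 = [&](size_t Off) {
    uint64_t V;
    std::memcpy(&V, Desc + Off, sizeof(V));
    return V;
  };
  std::vector<FileMapping> Maps;
  if (Size < 16)
    return Maps;
  uint64_t Count = read64(0);     // number of mappings; the page size is at offset 8
  if (Count > (Size - 16) / 24)   // descriptor too small for that many triples
    return Maps;
  size_t StrOff = 16 + Count * 24; // the path table follows the triples
  for (uint64_t I = 0; I < Count && StrOff < Size; ++I) {
    FileMapping M;
    M.Start = read64(16 + I * 24);
    M.End = read64(16 + I * 24 + 8);
    M.FileOfs = read64(16 + I * 24 + 16);
    const uint8_t *Nul = std::find(Desc + StrOff, Desc + Size, uint8_t(0));
    M.Path.assign(reinterpret_cast<const char *>(Desc + StrOff), Nul - (Desc + StrOff));
    StrOff = (Nul - Desc) + 1;
    Maps.push_back(std::move(M));
  }
  return Maps;
}
```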

Reviewers: MaskRay, jhenderson, grimar, alexshap

Reviewed By: MaskRay, grimar

Subscribers: emaste, llvm-commits, labath

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65832

llvm-svn: 368698
2019-08-13 14:38:45 +00:00
Simon Pilgrim
68d1415581 Fix -Wdocumentation warning (@returns used in void function). NFCI.
llvm-svn: 368693
2019-08-13 13:55:38 +00:00
Hubert Tong
1e00e2024f [AIX] Implement LR prolog/epilog save/restore
Summary:
This patch fixes the offsets of fields in the stack frame linkage save
area for AIX.

Reviewers: sfertile, hubert.reinterpretcast, jasonliu, Xiangling_L, xingxue, ZarkoCA, daltenty

Reviewed By: hubert.reinterpretcast

Subscribers: wuzish, nemanjai, hiraditya, kbarton, MaskRay, jsji, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64424

Patch by Chris Bowler!

llvm-svn: 368691
2019-08-13 13:38:24 +00:00
Roman Lebedev
0648d2150a [InstCombine] Non-canonical clamp-like pattern handling
Summary:
Given a pattern like:
```
%old_cmp1 = icmp slt i32 %x, C2
%old_replacement = select i1 %old_cmp1, i32 %target_low, i32 %target_high
%old_x_offseted = add i32 %x, C1
%old_cmp0 = icmp ult i32 %old_x_offseted, C0
%r = select i1 %old_cmp0, i32 %x, i32 %old_replacement
```
it can be rewritten as the more canonical pattern:
```
%new_cmp1 = icmp slt i32 %x, -C1
%new_cmp2 = icmp sge i32 %x, C0-C1
%new_clamped_low = select i1 %new_cmp1, i32 %target_low, i32 %x
%r = select i1 %new_cmp2, i32 %target_high, i32 %new_clamped_low
```
This holds iff `-C1 s<= C2 s<= C0-C1`.
The `ULT` predicate can also be `UGE`, or `UGT` iff `C0 != -1` (+invert result).
The `SLT` predicate can also be `SGE`, or `SGT` iff `C2 != INT_MAX` (+invert result).

If `C1 == 0`, then all 3 instructions must be one-use; else at most either `%old_cmp1` or `%old_x_offseted` can have extra uses.
NOTE: if we could reuse `%old_cmp1` as one of the comparisons we'll have to build, this could be less limiting.

So there are two icmps, each with 3 predicate variants, giving 9 fold variants:

|     | ULT                            | UGE                             | UGT                             |
| SLT | https://rise4fun.com/Alive/yIJ | https://rise4fun.com/Alive/5BfN | https://rise4fun.com/Alive/INH  |
| SGE | https://rise4fun.com/Alive/hd8 | https://rise4fun.com/Alive/Abk  | https://rise4fun.com/Alive/PlzS |
| SGT | https://rise4fun.com/Alive/VYG | https://rise4fun.com/Alive/oMY  | https://rise4fun.com/Alive/KrzC |
{F9730206}

This fold was brought up in https://reviews.llvm.org/D65148#1603922 by @dmgreen, and is needed to unblock that patch.
This patch requires D65530.
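
To make the equivalence concrete, here is a small exhaustive check over i8 with constants chosen to satisfy `-C1 s<= C2 s<= C0-C1`; the constants and helper names are illustrative and not taken from the patch or its tests:
```
#include <cassert>
#include <cstdint>

// Illustrative constants satisfying -C1 s<= C2 s<= C0-C1 (-16 <= 0 <= 16).
constexpr int8_t C0 = 32, C1 = 16, C2 = 0;
constexpr int8_t TargetLow = -100, TargetHigh = 100;

// Non-canonical form: an unsigned compare of the offset value selects between
// x itself and a select-of-thresholds replacement.
int8_t oldPattern(int8_t X) {
  uint8_t Offseted = static_cast<uint8_t>(X + C1); // wrapping add, then u<
  int8_t Replacement = (X < C2) ? TargetLow : TargetHigh;
  return (Offseted < static_cast<uint8_t>(C0)) ? X : Replacement;
}

// Canonical clamp: two signed compares against -C1 and C0-C1.
int8_t newPattern(int8_t X) {
  int8_t ClampedLow = (X < -C1) ? TargetLow : X;
  return (X >= C0 - C1) ? TargetHigh : ClampedLow;
}

int main() {
  for (int X = INT8_MIN; X <= INT8_MAX; ++X)
    assert(oldPattern(int8_t(X)) == newPattern(int8_t(X)));
}
```
Running it exercises all 256 i8 inputs for one (ULT, SLT) variant of the fold.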

Reviewers: spatel, nikic, xbolva00, dmgreen

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits, dmgreen

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65765

llvm-svn: 368687
2019-08-13 12:49:28 +00:00
Roman Lebedev
c08ef12796 [InstCombine][NFC] Rename IsFreeToInvert() -> isFreeToInvert() for consistency
As per https://reviews.llvm.org/D65530#inline-592325

llvm-svn: 368686
2019-08-13 12:49:16 +00:00
Roman Lebedev
a6a7f2817b [InstCombine] foldXorOfICmps(): don't give up on non-single-use ICmp's if all users are freely invertible
Summary:
This is rather unconventional..

As the comment there says, we don't have many folds for xor-of-icmps, so
we try to turn them into an and-of-icmps, for which we have plenty of folds.
But if the ICmp we need to invert is not single-use, we give up.

As discussed in https://reviews.llvm.org/D65148#1603922,
we may have a non-canonical CLAMP pattern, with bit math and a
select-of-threshold that we'll potentially clamp.
As can be seen in `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`,
out of all 8 variations of the pattern, only two are **not** canonicalized into
the variant with and+icmp instead of bit math.
The reason is that the ICmp we need to invert is not single-use, so we give up.

We indeed can't perform this fold at will; the general rule is that
we should not increase the instruction count in InstCombine.

But we wouldn't end up increasing the instruction count if we can adapt every
other user to the inverted value. This way the `not` we create **will** get
folded, and in the end the instruction count does not increase.

For that, of course, we need to look at the users of a Value,
which is again rather unconventional for InstCombine :S

Thus I'm proposing to be a little bit more insistent in `foldXorOfICmps()`.
The alternatives would be to not create that `not` but instead add duplicate
code to manually invert all users, or to add some even less general combine to
handle some more specific pattern[s].

Reviewers: spatel, nikic, RKSimon, craig.topper

Reviewed By: spatel

Subscribers: hiraditya, jdoerfert, dmgreen, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65530

llvm-svn: 368685
2019-08-13 12:49:06 +00:00
George Rimar
cb83c675d9 [llvm-readobj] - Remove 'error(Error EC)' helper.
We do not need it. I replaced it with
reportError(StringRef Input, Error Err).

Differential revision: https://reviews.llvm.org/D66011

llvm-svn: 368677
2019-08-13 12:07:41 +00:00
Nico Weber
94809595b4 gn build: Extract git() and git_out() functions in sync script
llvm-svn: 368671
2019-08-13 11:48:15 +00:00
Nico Weber
0c4a5b4fb3 gn build: Merge r368630
llvm-svn: 368668
2019-08-13 11:32:54 +00:00
Nico Weber
9f1b5e2f87 gn build: Give cmake sync script an opt-in --write flag
Differential Revision: https://reviews.llvm.org/D66101

llvm-svn: 368667
2019-08-13 11:32:45 +00:00
Nico Weber
5b33b4788c gn build: Make sync script group output by revision
Differential Revision: https://reviews.llvm.org/D66090

llvm-svn: 368665
2019-08-13 11:24:20 +00:00
Simon Pilgrim
c8a1b64afb [X86] XFormVExtractWithShuffleIntoLoad - handle shuffle mask scaling
If the target shuffle mask is from a wider type, attempt to scale the mask so that the extraction can peek through it.

Fixes the regression mentioned in rL368662

Reapplying this as rL368308 had to be reverted as part of rL368660 to revert rL368276

llvm-svn: 368663
2019-08-13 11:11:42 +00:00
Simon Pilgrim
6565cdcf48 [X86] SimplifyDemandedVectorElts - attempt to recombine target shuffle using DemandedElts mask (reapplied)
If we don't demand all elements, then attempt to combine to a simpler shuffle.

At the moment we can only do this if Depth == 0 as combineX86ShufflesRecursively uses Depth to track whether the shuffle has really changed or not - we'll need to change this before we can properly start merging combineX86ShufflesRecursively into SimplifyDemandedVectorElts. 

The insertps-combine.ll regression is because XFormVExtractWithShuffleIntoLoad can't see through shuffles of different widths - this will be fixed in a follow-up commit.

Reapplying this as rL368307 had to be reverted as part of rL368660 to revert rL368276

llvm-svn: 368662
2019-08-13 10:51:39 +00:00
Hans Wennborg
f0efe6788d Revert r368276 "[TargetLowering] SimplifyDemandedBits - call SimplifyMultipleUseDemandedBits for ISD::EXTRACT_VECTOR_ELT"
This introduced a false positive MemorySanitizer warning about use of
uninitialized memory in a vectorized crc function in Chromium. That suggests
maybe something is not right with this transformation. See
https://crbug.com/992853#c7 for a reproducer.

This also reverts the follow-up commits r368307 and r368308 which
depended on this.

> This patch attempts to peek through vectors based on the demanded bits/elt of a particular ISD::EXTRACT_VECTOR_ELT node, allowing us to avoid dependencies on ops that have no impact on the extract.
>
> In particular this helps remove some unnecessary scalar->vector->scalar patterns.
>
> The wasm shift patterns are annoying - @tlively has indicated that the wasm vector shift codegen are to be refactored in the near-term and isn't considered a major issue.
>
> Differential Revision: https://reviews.llvm.org/D65887

llvm-svn: 368660
2019-08-13 09:33:25 +00:00
David Bolvansky
7795cffbab [SimplifyLibCalls] Add dereferenceable bytes from known callsites
Summary:
```
int mm(char *a, char *b) {
    return memcmp(a,b,16);
}
```

Currently:
```
define dso_local i32 @mm(i8* nocapture readonly %a, i8* nocapture readonly %b) local_unnamed_addr #1 {
entry:
  %call = tail call i32 @memcmp(i8* %a, i8* %b, i64 16)
  ret i32 %call
}
```

After patch:
```
define dso_local i32 @mm(i8* nocapture readonly %a, i8* nocapture readonly %b) local_unnamed_addr #1 {
entry:
  %call = tail call i32 @memcmp(i8* dereferenceable(16) %a, i8* dereferenceable(16) %b, i64 16)
  ret i32 %call
}
```

Reviewers: jdoerfert, efriedma

Reviewed By: jdoerfert

Subscribers: javed.absar, spatel, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66079

llvm-svn: 368657
2019-08-13 09:11:49 +00:00
Roman Lebedev
c602657b85 [NFC][InstCombine] Non-canonical clamp pattern: non-canonical predicate tests
We can't handle the 'uge' case because we can never actually get it:
the compare would need an extra use to survive canonicalization, but that
extra use is exactly what prevents us from handling it.

The 'sge' case we can have.

llvm-svn: 368656
2019-08-13 08:14:13 +00:00
Qiu Chaofan
634e00ad3f [PowerPC] Fix ICE when truncating some vectors
The legalizer would hit an assertion on the PowerPC platform when truncating
a vector whose size is not a power of 2. This patch adds a check to
prevent such odd-sized vectors from being custom lowered.

Reviewed By: Hal Finkel

Differential Revision: https://reviews.llvm.org/D65261

llvm-svn: 368654
2019-08-13 07:53:29 +00:00
Amara Emerson
94dad7bb1c [AArch64][GlobalISel] Replace explicit vreg creation with implicit using SrcOp. NFC.
llvm-svn: 368653
2019-08-13 06:55:32 +00:00
Amara Emerson
13cefd7cf8 [GlobalISel] Make the InstructionSelector instance non-const, allowing state to be maintained.
Currently we can't keep any state in the selector object that we get from
the subtarget. As a result, we have to plumb all our variables through
multiple functions. This change makes it non-const and adds a virtual init()
method to allow further state to be captured for each target.

In this patch, AArch64 uses it to cache a call to hasFnAttribute(), which is
expensive and is made on each selection of G_BRCOND.
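
A rough sketch of the shape of this design, using made-up class and member names rather than LLVM's actual GlobalISel interfaces:
```
#include <cstdio>

struct Function {
  bool HasBranchHint = false; // stand-in for an expensive-to-query attribute
};

// Selector base: state is allowed, refreshed once per function via init().
class SelectorBase {
public:
  virtual ~SelectorBase() = default;
  virtual void init(const Function &) {}
};

class MyTargetSelector : public SelectorBase {
  bool CachedBranchHint = false; // cached instead of re-queried per instruction

public:
  void init(const Function &F) override { CachedBranchHint = F.HasBranchHint; }
  void selectBranch() const { std::printf("hint=%d\n", CachedBranchHint); }
};

int main() {
  Function F;
  F.HasBranchHint = true;
  MyTargetSelector Sel;
  Sel.init(F);        // called once before selecting the function
  Sel.selectBranch(); // uses the cached value on every branch selection
}
```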

Differential Revision: https://reviews.llvm.org/D65984

llvm-svn: 368652
2019-08-13 06:26:59 +00:00
Serge Pavlov
2dd1bcc226 Added unit tests to check supported rounding modes
Also fixed a misspelled metadata name.

Differential Revision: https://reviews.llvm.org/D66073

llvm-svn: 368650
2019-08-13 05:21:18 +00:00
Aditya Nandakumar
8603d4c930 [GlobalISel]: Add KnownBits for G_XOR
https://reviews.llvm.org/D66119

llvm-svn: 368648
2019-08-13 04:32:33 +00:00
Yevgeny Rouban
52c3784fc6 Verifier: check prof branch_weights
This patch checks some of the constraints on
!prof branch_weights metadata:
https://llvm.org/docs/BranchWeightMetadata.html

Reviewers: asbirlea, reames, chandlerc
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D61179

llvm-svn: 368647
2019-08-13 04:03:38 +00:00
Akira Hatanaka
3381f8fbc6 Do not call replaceAllUsesWith to upgrade calls to ARC runtime functions
to intrinsic calls

This fixes a bug in r368311.

It turns out that the ARC runtime functions in the IR can have pointer
parameter types that are not i8* or i8**. Instead of RAUWing normal
functions with intrinsics, manually bitcast the arguments before passing
them to the intrinsic functions and bitcast the return value back to the
type of the original call instruction.

This recommits r368634, which was reverted in r368637. The loop in the
patch was iterating over uses of a function and deleting function calls
inside it, which caused bots to crash.
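
The crash is the classic hazard of mutating a use list while iterating over it. Below is a generic, self-contained illustration of the usual collect-then-mutate fix, using standard containers rather than the actual ARC upgrade code:
```
#include <cassert>
#include <list>
#include <vector>

// Erase elements matching a predicate without invalidating the traversal:
// record what to erase during the walk, then erase after the walk is done.
void eraseEvens(std::list<int> &Uses) {
  std::vector<std::list<int>::iterator> ToErase;
  for (auto It = Uses.begin(); It != Uses.end(); ++It)
    if (*It % 2 == 0)
      ToErase.push_back(It); // collect now...
  for (auto It : ToErase)
    Uses.erase(It);          // ...mutate later
}

int main() {
  std::list<int> Uses = {1, 2, 3, 4, 5, 6};
  eraseEvens(Uses);
  assert(Uses == (std::list<int>{1, 3, 5}));
}
```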

rdar://problem/54125406

Differential Revision: https://reviews.llvm.org/D66047

llvm-svn: 368646
2019-08-13 01:23:06 +00:00
Stanislav Mekhanoshin
e62b2c77ca [AMDGPU] Fix msan failure in printf lowering
llvm-svn: 368645
2019-08-13 01:07:27 +00:00
Daniel Sanders
c32fe492af Eliminate implicit Register->unsigned conversions in VirtRegMap. NFC
Summary:
This was mostly an experiment to assess the feasibility of completely
eliminating a problematic implicit conversion case in D61321 in advance of
landing that* but it also happens to align with the goal of propagating the
use of Register/MCRegister instead of unsigned so I believe it makes sense
to commit it.

The overall process for eliminating the implicit conversions from
Register/MCRegister -> unsigned was to:
1. Add an explicit conversion to support genuinely required conversions to
   unsigned. For example, using them as an index for IndexedMap. Sadly it's
   not possible to have both an explicit and an implicit conversion to the
   same type and only deprecate the implicit one, so I called the explicit
   conversion get().
2. Temporarily annotate the implicit conversion to unsigned with
   LLVM_ATTRIBUTE_DEPRECATED to make them visible
3. Eliminate implicit conversions by propagating Register/MCRegister/
   explicit-conversions appropriately
4. Remove the deprecation added in 2.

* My conclusion is that it isn't feasible as there's too much code to
  update in one go.
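
As a self-contained sketch of the mechanism behind steps 1-2 above, here is a made-up wrapper class with `[[deprecated]]` standing in for LLVM_ATTRIBUTE_DEPRECATED:
```
#include <cstdio>

// Illustrative register wrapper: an explicit get() for the genuinely required
// conversions, and a temporarily deprecated implicit conversion so that every
// remaining implicit use shows up as a compiler warning.
class Reg {
  unsigned Value = 0;

public:
  constexpr explicit Reg(unsigned V) : Value(V) {}

  // Step 1: explicit escape hatch, e.g. for use as an IndexedMap-style index.
  constexpr unsigned get() const { return Value; }

  // Step 2: flag the implicit conversion while call sites are migrated.
  [[deprecated("use get() or keep the value as Reg")]]
  constexpr operator unsigned() const { return Value; }
};

int main() {
  Reg R(42);
  unsigned Index = R.get(); // intended usage
  // unsigned Implicit = R; // compiles, but emits a deprecation warning
  std::printf("%u\n", Index);
}
```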

Depends on D65678

Reviewers: arsenm

Subscribers: MatzeB, wdng, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65685

llvm-svn: 368643
2019-08-13 00:55:24 +00:00
Akira Hatanaka
fb7c9b1277 Revert "Do not call replaceAllUsesWith to upgrade calls to ARC runtime functions"
This reverts commit r368634 because it broke a bot.

llvm-svn: 368637
2019-08-13 00:20:36 +00:00
Eric Christopher
540b42743d Move findBBwithCalls to the file it's used in to avoid unused function
warnings.

llvm-svn: 368636
2019-08-13 00:05:01 +00:00
Akira Hatanaka
b441a22f21 Do not call replaceAllUsesWith to upgrade calls to ARC runtime functions
to intrinsic calls

This fixes a bug in r368311.

It turns out that the ARC runtime functions in the IR can have pointer
parameter types that are not i8* or i8**. Instead of RAUWing normal
functions with intrinsics, manually bitcast the arguments before passing
them to the intrinsic functions and bitcast the return value back to the
type of the original call instruction.

rdar://problem/54125406

llvm-svn: 368634
2019-08-12 23:53:23 +00:00
Stanislav Mekhanoshin
96868dd303 [AMDGPU] removed unused functions from printf lowering
Differential Revision: https://reviews.llvm.org/D66117

llvm-svn: 368633
2019-08-12 23:32:35 +00:00
Reid Kleckner
13cbb221ad [WinEH] Fix catch block parent frame pointer offset
r367088 made it so that funclets store XMM registers into their local
frame instead of storing them to the parent frame. However, that change
forgot to update the parent frame pointer offset for catch blocks. This
change does that.

Fixes crashes when an exception is rethrown in a catch block that saves
XMMs, as described in https://crbug.com/992860.

llvm-svn: 368631
2019-08-12 23:02:00 +00:00
Juergen Ributzka
6ec27202c1 [TextAPI] Fix & Add tests for tbd files version 3.
- There was a simple typo in the TextStub code that prevented version 3 files from being read.
- Included a version 3 unit test to cover the differences in the format.
- Also fixed a typo in the comments in Error.h.

https://reviews.llvm.org/D66041

This patch is from Cyndy Ishida <cyndy_ishida@apple.com>.

llvm-svn: 368630
2019-08-12 23:01:07 +00:00
Daniel Sanders
80f1288e13 [risc-v] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check is looking for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).

Depends on D65919

Reviewers: lenary

Subscribers: jholewinski, MatzeB, qcolombet, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, wdng, nhaehnle, sbc100, jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, javed.absar, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, tpr, PkmX, jocewei, jsji, Petar.Avramovic, asbirlea, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision for full review was: https://reviews.llvm.org/D65962

llvm-svn: 368629
2019-08-12 22:41:02 +00:00
Daniel Sanders
4bbcbcaf46 [aarch64] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check is looking for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).

Manual fixups in:
AArch64InstrInfo.cpp - genFusedMultiply() now takes a Register* instead of unsigned*
AArch64LoadStoreOptimizer.cpp - Ternary operator was ambiguous between Register/MCRegister. Settled on Register

Depends on D65919

Reviewers: aemerson

Subscribers: jholewinski, MatzeB, qcolombet, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, wdng, nhaehnle, sbc100, jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, javed.absar, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, tpr, PkmX, jocewei, jsji, Petar.Avramovic, asbirlea, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision for full review was: https://reviews.llvm.org/D65962

llvm-svn: 368628
2019-08-12 22:40:53 +00:00
Daniel Sanders
bcab1537f6 [webassembly] Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM
Summary:
This clang-tidy check is looking for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).

Reviewers: aheejin

Subscribers: jholewinski, MatzeB, qcolombet, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, wdng, nhaehnle, sbc100, jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, javed.absar, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, tpr, PkmX, jocewei, jsji, Petar.Avramovic, asbirlea, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision for whole review: https://reviews.llvm.org/D65962

llvm-svn: 368627
2019-08-12 22:40:45 +00:00
Stanislav Mekhanoshin
29183994d8 [AMDGPU] Use PredicateControl in MIMGBaseOpcode. NFC.
This is infrastructural and will be needed for future work.
For some reason it was only used in MIMG_NoSampler, while it is
needed everywhere we use MIMGBaseOpcode if we want to use
predicates.

Differential Revision: https://reviews.llvm.org/D66115

llvm-svn: 368626
2019-08-12 22:32:21 +00:00
Johannes Doerfert
707cedda13 [Attributor] Use the cached data layout directly
This removes the warning by using the new DL member.
It also simplifies the code.

llvm-svn: 368625
2019-08-12 22:21:09 +00:00
Whitney Tsang
4f7d7c45b5 Title: Fix build warning for operator<< when using GCC 7.
Authored By: etiotto
Differential Revision: https://reviews.llvm.org/D63459

llvm-svn: 368624
2019-08-12 22:20:54 +00:00
Craig Topper
28248b6f86 [X86] Allow combineTruncateWithSat to use pack instructions for i16->i8 without AVX512BW.
We need AVX512BW to be able to truncate an i16 vector. If we don't
have that, we have to extend i16->i32, then truncate i32->i8. But we
won't be able to remove the min/max if we do that, at least not
without more special handling.

llvm-svn: 368623
2019-08-12 22:18:23 +00:00
Johannes Doerfert
20276aec2c [Attributor][NFC] Add IntegerState raw_ostream << operator
llvm-svn: 368622
2019-08-12 22:07:34 +00:00
Johannes Doerfert
4ed70fe678 [Attributor] Make the InformationCache an Attributor member
The functionality is not changed but the interfaces are simplified and
repetition is removed.

llvm-svn: 368621
2019-08-12 22:05:53 +00:00
Aditya Nandakumar
cdccf3fc54 [GISel]: Fix a bug in KnownBits where we should have been using SizeInBits
https://reviews.llvm.org/D66039

We were using getIndexSize instead of getIndexSizeInBits().
Added test cases for G_PTRTOINT and G_INTTOPTR.

llvm-svn: 368618
2019-08-12 21:28:12 +00:00
Juergen Ributzka
6999d51166 Revert "Disable MachO TBD write tests for Windows."
The underlying issue was fixed in r357759.

llvm-svn: 368611
2019-08-12 19:51:34 +00:00
Craig Topper
3be53dbcae [X86] Remove unreachable code from LowerTRUNCATE. NFC
All three 256->128 bit cases were already handled above.

Noticed while looking at the coverage report.

llvm-svn: 368609
2019-08-12 19:26:45 +00:00
Craig Topper
28b07efb51 [X86] Add a paranoia type check to the code that detects AVG patterns from truncating stores.
If we're after type legalization, we should make sure we won't create
a store with an illegal type when we separate the AVG pattern
from the truncating store.

I don't know of a way to fail for this today. Just noticed while
I was in the vicinity.

llvm-svn: 368608
2019-08-12 19:26:37 +00:00