mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

147699 Commits

Author SHA1 Message Date
Kostya Serebryany
43072589c3 [sanitizer-coverage] remove some more stale code
llvm-svn: 300778
2017-04-19 22:42:11 +00:00
Evgeniy Stepanov
b788f7a185 Remove two unused variables (-Werror).
llvm-svn: 300777
2017-04-19 22:27:23 +00:00
Craig Topper
a164e5abad [APInt] Cast more calls to add/sub/mul overflow functions to void. I missed the unittests in r300758.
llvm-svn: 300773
2017-04-19 22:11:05 +00:00
Sanjay Patel
fab0adbac6 [DAG] add splat vector support for 'or' in SimplifyDemandedBits
I've changed one of the tests so that it does not fold away, but we didn't and still don't do the
transform that the comment claims we do (and I don't know why we'd want to).

Follow-up to:
https://reviews.llvm.org/rL300725
https://reviews.llvm.org/rL300763

llvm-svn: 300772
2017-04-19 22:00:00 +00:00
Kostya Serebryany
91e92ad3e0 [sanitizer-coverage] remove stale code
llvm-svn: 300769
2017-04-19 21:48:09 +00:00
Kostya Serebryany
a48070b1b0 [libFuzzer] remove -output_csv option. It duplicates the default output and got out of sync
llvm-svn: 300768
2017-04-19 21:34:58 +00:00
Sanjay Patel
c44de937c8 [DAG] add splat vector support for 'xor' in SimplifyDemandedBits
This allows forming more 'not' ops, so we get improvements for ISAs that have and-not.

Follow-up to:
https://reviews.llvm.org/rL300725

llvm-svn: 300763
2017-04-19 21:23:09 +00:00
Matthias Braun
ad6c52ac72 ARMFrameLowering: Reserve emergency spill slot for large arguments
Re-commit after the revert in r300668. Changed getMaxFPOffset() to a
more conservative heuristic instead of trying to be clever and getting it
wrong for some exotic calling conventions.

We need to reserve an emergency spill slot in cases with large argument
types that could overflow immediate offsets for FP relative address
calculations.

rdar://31317893

Differential Revision: https://reviews.llvm.org/D31643

llvm-svn: 300761
2017-04-19 21:11:44 +00:00
Craig Topper
f3d361bf65 [APInt] Cast calls to add/sub/mul overflow methods to void if only their overflow bool out param is used.
This is preparation for a clang change that improves the [[nodiscard]] warning so it is no longer ignored on methods, defined in the class itself, that return a class marked [[nodiscard]]. See D32207.

We should consider adding wrapper methods to APInt that return the overflow flag directly and discard the APInt result. This would eliminate the void casts and the need to create a bool before the call to pass to the out param.

llvm-svn: 300758
2017-04-19 21:09:45 +00:00
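A minimal sketch of the wrapper idea suggested in the commit above (a hypothetical helper, not an existing APInt method): return only the overflow flag, so callers need neither a void cast nor a throwaway bool.

#include "llvm/ADT/APInt.h"

// Hypothetical free function: report whether signed addition of LHS and RHS
// would overflow, discarding the computed sum.
static bool signedAddOverflows(const llvm::APInt &LHS, const llvm::APInt &RHS) {
  bool Overflow = false;
  (void)LHS.sadd_ov(RHS, Overflow); // sum is intentionally discarded
  return Overflow;
}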
Simon Pilgrim
f84a10ee86 [InstCombine] Add frem constant folding test (PR3316)
llvm-svn: 300757
2017-04-19 21:09:19 +00:00
Matt Arsenault
72dd027fd6 AMDGPU: Custom lower illegal small select types
Promote them to i32 vectors to avoid unpacking and re-packing
the vectors.

llvm-svn: 300754
2017-04-19 20:53:07 +00:00
Dehao Chen
49d94bb411 Code style change as suggested in https://reviews.llvm.org/D32177 (NFC)
llvm-svn: 300753
2017-04-19 20:52:21 +00:00
Eli Friedman
7a7eb8d03b [ARM] Remove redundant computeKnownBits helper.
Move the BFI logic to computeKnownBitsForTargetNode, and delete
the redundant CMOV logic.

This is intended as a cleanup, but it's probably possible to construct
a case where moving the BFI logic allows more combines.

Differential Revision: https://reviews.llvm.org/D31795

llvm-svn: 300752
2017-04-19 20:50:57 +00:00
Aditya Nandakumar
528682bdac [GISEL]: Move getConstantVReg to Utils
NFCI

llvm-svn: 300751
2017-04-19 20:48:50 +00:00
Simon Pilgrim
1db704ce33 [InstCombine] Add frem constant folding test (PR32177)
llvm-svn: 300750
2017-04-19 20:47:58 +00:00
Eli Friedman
226d22a6a5 [ARM] Use TableGen patterns to select vtbl. NFC.
Differential Revision: https://reviews.llvm.org/D32103

llvm-svn: 300749
2017-04-19 20:39:39 +00:00
Craig Topper
a8f60dd7ea [APInt] Use SignExtend64 instead of reinventing it. NFC
llvm-svn: 300747
2017-04-19 20:32:11 +00:00
Eli Friedman
c5a8782468 [SCEV] Make SCEV 'or' modeling more aggressive.
Use haveNoCommonBitsSet to figure out whether an "or" instruction
is equivalent to addition. This handles more cases than just
checking for a constant on the RHS.

Differential Revision: https://reviews.llvm.org/D32239

llvm-svn: 300746
2017-04-19 20:19:58 +00:00
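A standalone illustration (not the LLVM API) of why the modeling above is sound: when two values share no set bits, bitwise OR and addition give the same result, so SCEV can treat the 'or' as '+'.

#include <cassert>
#include <cstdint>

int main() {
  uint64_t A = 0xF0, B = 0x0F;  // disjoint bit patterns
  assert((A & B) == 0);         // no common bits set
  assert((A | B) == (A + B));   // or == add when the bits are disjoint
  return 0;
}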
Dehao Chen
5b504da948 Using an address range map to speed up finding the inline stack for an address.
Summary:
In the current implementation, finding the inline stack for an address incurs an expensive linear search in 2 places:

* linear search for the top-level DIE
* recursive linear traversal of the DIE tree to find the path to the leaf DIE

In this patch, a map is built from each address to its corresponding leaf DIE. The inline stack is built by traversing from the leaf DIE up to the root DIE. This speeds up batch symbolization by ~10X without noticeable memory overhead.

Reviewers: dblaikie

Reviewed By: dblaikie

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32177

llvm-svn: 300742
2017-04-19 20:09:38 +00:00
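A minimal sketch of the data-structure idea described in the commit above; the names (DIENode, buildInlineStack) are hypothetical and not the DWARF parser's API.

#include <cstdint>
#include <map>
#include <vector>

struct DIENode {
  const DIENode *Parent = nullptr; // containing or inlined-at DIE
  // ... subprogram / inlined-subroutine details would live here ...
};

// Map from the start address of each range to the leaf DIE covering it.
using AddrToLeafMap = std::map<uint64_t, const DIENode *>;

// Build the inline stack (leaf to root) by walking parent links instead of
// re-searching the DIE tree for every query. A real implementation would also
// check that Addr lies before the range's end address.
std::vector<const DIENode *> buildInlineStack(const AddrToLeafMap &Map,
                                              uint64_t Addr) {
  std::vector<const DIENode *> Stack;
  auto It = Map.upper_bound(Addr); // first range starting after Addr
  if (It == Map.begin())
    return Stack;                  // no range starts at or before Addr
  --It;                            // candidate range containing Addr
  for (const DIENode *D = It->second; D; D = D->Parent)
    Stack.push_back(D);
  return Stack;
}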
Dehao Chen
3208b9d0b7 Update the madd.ll test with utils/update_llc_test_checks.py (NFC)
llvm-svn: 300740
2017-04-19 20:08:14 +00:00
Dehao Chen
84ece8088b PR32710: Disable using PMADDWD for unsigned short.
Summary: PMADDWD can only handle signed short.

Reviewers: mkuper, wmi

Reviewed By: mkuper

Subscribers: andreadb, llvm-commits

Differential Revision: https://reviews.llvm.org/D32236

llvm-svn: 300737
2017-04-19 19:50:34 +00:00
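A scalar model of one PMADDWD lane, for illustration only (not the lowering code): the instruction multiplies signed 16-bit elements and adds adjacent products, which is why it cannot be used for unsigned shorts that exceed the signed range.

#include <cstdint>

// One 32-bit lane of PMADDWD, modeled in scalar code: both inputs are
// interpreted as signed 16-bit values.
int32_t pmaddwdLane(int16_t A0, int16_t A1, int16_t B0, int16_t B1) {
  return int32_t(A0) * int32_t(B0) + int32_t(A1) * int32_t(B1);
}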
Matt Arsenault
43d2e9a492 AMDGPU: Don't emit amd_kernel_code_t for callable functions
This is inserted directly in the text section. The relocation
for the function ends up resolving to the beginning of the
amd_kernel_code_t header rather than the actual function
entry point.

Also skip some of the comments for initialization
that only make sense for kernels.

llvm-svn: 300736
2017-04-19 19:38:10 +00:00
Aditya Nandakumar
3ace9dee64 [tblgen] GCC/MS builtin to target intrinsics map.
Patch by Ettore Speziale

Allow TableGen to generate static functions to perform GCC/MS builtin name to
target specific intrinsic ID mapping.

https://reviews.llvm.org/D31150

llvm-svn: 300735
2017-04-19 19:14:20 +00:00
Artem Tamazov
84d0c8fabe [AMDGPU][mc][tests][NFC] Update bulk ISA tests for Gfx7 and Gfx8
Added approx. 1100 gfx7 and 1040 gfx8 test cases.

llvm-svn: 300734
2017-04-19 19:12:06 +00:00
Matt Arsenault
7acc7fbaea StructurizeCFG: Directly invert cmp instructions
The most common case for a branch condition is
a single-use compare. Directly invert the branch
predicate rather than adding a lot of 'xor i1 true'
instructions that the DAG would have to fold later.

This produces nicer-to-read structurizer output.

This produces some random changes in codegen
because the DAG swaps branch conditions itself
and then does a poor job of dealing with those
inverts.

llvm-svn: 300732
2017-04-19 18:29:07 +00:00
Sanjoy Das
8762f07b17 [GVN] Don't coerce non-integral pointers to integers or vice versa
Summary:
See http://llvm.org/docs/LangRef.html#non-integral-pointer-type

The NewGVN test does not fail without these changes (perhaps it does not
try to coerce pointers <-> integers to begin with?), but I added the
test case anyway.

Reviewers: dberlin

Subscribers: mcrosier, llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D32208

llvm-svn: 300730
2017-04-19 18:21:09 +00:00
Richard Smith
85cb2e7413 Update comment to match r300252.
llvm-svn: 300728
2017-04-19 18:17:51 +00:00
Tim Northover
ed63cc7fe3 ARM: TLS calling convention doesn't preserve r9 or r12 on Darwin.
llvm-svn: 300726
2017-04-19 18:07:54 +00:00
Sanjay Patel
cdbefd846d [DAG] add splat vector support for 'and' in SimplifyDemandedBits
The patch itself is simple: stop discriminating against vectors in visitAnd() and again in 
SimplifyDemandedBits().

Some notes for reference:

1. We're not consistent about calls to SimplifyDemandedBits in the various visitXXX functions. 
   Sometimes, we check if the RHS is a constant first. Other times (like here), we just dive in.
2. I'd like to break the vector shackles in steps for the sake of risk minimization, but we could
    make similar simultaneous changes in other places if we think that would be better.
3. I don't know what the intent of the changed tests in this patch was supposed to be, but since 
   they wiggled in a positive way, I'm just going with that. :)
4. In the rotate tests, note that we can see through non-splat constants. This is a result of D24253.
5. My motivation for being here now is to make D31944 look better, so this is step 1 of N towards 
   improving the vector codegen in that patch without writing any actual new code.

Differential Revision: https://reviews.llvm.org/D32230

llvm-svn: 300725
2017-04-19 18:05:06 +00:00
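A scalar illustration of the demanded-bits idea being extended to splat vectors above (not the DAG code): when a later mask demands only bits that an inner mask already preserves, the inner 'and' is redundant.

#include <cassert>
#include <cstdint>

int main() {
  uint32_t X = 0xDEADBEEF;
  // The outer mask 0x0F demands only bits that 0xFF already keeps,
  // so the inner 'and' can be dropped.
  assert(((X & 0xFFu) & 0x0Fu) == (X & 0x0Fu));
  return 0;
}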
Peter Collingbourne
b809198bf3 IR: Remove some comments that are documenting the obvious. NFC.
llvm-svn: 300724
2017-04-19 18:00:05 +00:00
Benjamin Kramer
ccb13ded57 [MathExtras] Fix undefined behavior (shift by bit width)
While there, add some unit tests for uint64_t. Found by ubsan.

llvm-svn: 300721
2017-04-19 17:46:15 +00:00
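An illustration of the hazard fixed above (an assumed example, not the MathExtras code): in C++, shifting a value by its full bit width is undefined behavior, so that case has to be handled explicitly.

#include <cstdint>

// Guarded left shift: a plain 'V << 64' on a uint64_t would be undefined.
uint64_t shiftLeftClamped(uint64_t V, unsigned N) {
  return N >= 64 ? 0 : (V << N);
}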
Matt Arsenault
33960d0165 AMDGPU: Don't align callable functions to 256
llvm-svn: 300720
2017-04-19 17:42:39 +00:00
Matt Arsenault
b3625be5da AMDGPU: Change DivergenceAnalysis for function arguments
Stop assuming all functions are kernels.

llvm-svn: 300719
2017-04-19 17:42:34 +00:00
Reid Kleckner
8c8fd1ce41 Prefer addAttr(Attribute::AttrKind) over the AttributeList overload
This should simplify the call sites, which typically want to tweak one
attribute at a time. It should also avoid creating ephemeral
AttributeLists that live forever.

llvm-svn: 300718
2017-04-19 17:28:52 +00:00
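An illustrative use of the preferred style described above, assuming a valid llvm::Function reference: the AttrKind overload tweaks a single attribute without building a temporary AttributeList.

#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

// Mark a function nounwind via the single-attribute overload.
void markNoUnwind(llvm::Function &F) {
  F.addFnAttr(llvm::Attribute::NoUnwind);
}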
Davide Italiano
a1e2f0586e [InstCombine] Reduce visitLoadInst() code duplication. NFCI.
llvm-svn: 300717
2017-04-19 17:26:57 +00:00
Craig Topper
23e714e783 [APInt] Move the 'return *this' from the slow cases of assignment operators inline. We should let the compiler see that the fast/slow cases both return *this.
I don't think we chain assignments together very often so this shouldn't matter much.

llvm-svn: 300715
2017-04-19 17:01:58 +00:00
Sanjay Patel
219e228d73 [InstSimplify] fold identity shuffles (recursing if needed)
This patch simplifies the examples from D31509 and D31927 (PR30630) and catches 
the basic identity shuffle tests that Zvi recently added.

I'm not sure if we have something like this in DAGCombiner, but we should?

It's worth noting that "MaxRecurse / RecursionLimit" is only 3 on entry at the moment. 
We might want to bump that up if there are longer shuffle chains like this in the wild.

For now, we're ignoring shuffles that have undef mask elements because it's not
clear how those should be handled.

Differential Revision: https://reviews.llvm.org/D31960

llvm-svn: 300714
2017-04-19 16:48:22 +00:00
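A small sketch of the core check (a hypothetical helper, not the InstSimplify code): a shuffle mask is an identity for its first operand when every element selects position i; per the note above, masks containing undef elements are left alone.

#include <vector>

// Return true if Mask picks element i from the first operand at every
// position (no undef sentinels are accepted in this simplified check).
bool isIdentityMask(const std::vector<int> &Mask) {
  for (int I = 0, E = static_cast<int>(Mask.size()); I != E; ++I)
    if (Mask[I] != I)
      return false;
  return true;
}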
Sanjay Patel
6f797a6f8c use 'auto' with 'dyn_cast' and fix formatting; NFC
llvm-svn: 300713
2017-04-19 16:22:19 +00:00
Zachary Turner
8bfdd12a0f Add an #include for <climits> for CHAR_BIT.
llvm-svn: 300711
2017-04-19 15:50:43 +00:00
Zachary Turner
1e49b0266c [Support] Add some helpers to generate bitmasks.
Frequently you want a bitmask consisting of a specified
number of 1s, either at the beginning or end of a word.

The naive way to do this is to write

template<typename T>
T leadingBitMask(unsigned N) {
  return (T(1) << N) - 1;
}

but using this function you cannot produce a word with every
bit set to 1 (i.e. leadingBitMask<uint8_t>(8)) because left
shift is undefined when N is greater than or equal to the
number of bits in the word.

This patch provides an efficient, branch-free implementation
that works for all values of N in [0, CHAR_BIT*sizeof(T)].

Differential Revision: https://reviews.llvm.org/D32212

llvm-svn: 300710
2017-04-19 15:45:31 +00:00
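A minimal sketch in the spirit of the helpers described above (the in-tree version may differ): handle N == 0 explicitly so no shift ever uses the full bit width.

#include <climits>
#include <type_traits>

// Mask of N trailing 1 bits, valid for every N in [0, CHAR_BIT * sizeof(T)].
template <typename T>
T trailingOnesMask(unsigned N) {
  static_assert(std::is_unsigned<T>::value, "requires an unsigned type");
  const unsigned Bits = CHAR_BIT * sizeof(T);
  return N == 0 ? T(0) : T(T(~T(0)) >> (Bits - N));
}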
Dehao Chen
5f580343ee Revert r300697 which causes buildbot failure.
llvm-svn: 300708
2017-04-19 15:28:58 +00:00
Krzysztof Parzyszek
c71372b507 [Hexagon] Generate proper offset in opt-addr-mode
Also, make a few changes to allow using the pass in .mir testcases.
Among other things, change the abbreviation from opt-amode to amode-opt,
because otherwise lit would expand the "opt" part to the full path to
the opt binary.

llvm-svn: 300707
2017-04-19 15:15:51 +00:00
Krzysztof Parzyszek
4eeff51c37 [Hexagon] Remove RDefMap, use Liveness:getNearestAliasedRef instead
llvm-svn: 300706
2017-04-19 15:14:30 +00:00
Krzysztof Parzyszek
620e6598d9 [RDF] Switch NodeList to SmallVector from std::vector
The list has a single element 75+% of the time; reserving 4 elements
is sufficient in 95% of cases.

llvm-svn: 300705
2017-04-19 15:12:44 +00:00
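An illustrative usage of the container choice above: llvm::SmallVector<T, 4> stores up to four elements inline, so the common single-element list never touches the heap.

#include "llvm/ADT/SmallVector.h"

// The single-element common case stays entirely in the inline storage.
llvm::SmallVector<unsigned, 4> singletonList(unsigned Value) {
  llvm::SmallVector<unsigned, 4> List;
  List.push_back(Value); // no heap allocation for the first four elements
  return List;
}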
Krzysztof Parzyszek
9e99b2a72f [RDF] Use faster version of findBlock
llvm-svn: 300704
2017-04-19 15:11:23 +00:00
Krzysztof Parzyszek
24f0ab7329 [RDF] Cache register units for reg masks instead of recalculating them
llvm-svn: 300702
2017-04-19 15:10:09 +00:00
Krzysztof Parzyszek
5be03deae1 [Hexagon] Cache reached blocks in bit tracker instead of scanning list
llvm-svn: 300701
2017-04-19 15:08:31 +00:00
Sanjay Patel
6c96884dab [PowerPC] add test and auto-generate checks; NFC
llvm-svn: 300700
2017-04-19 14:58:09 +00:00
Sanjay Patel
d1717171e7 [ARM] add test and auto-generate checks; NFC
llvm-svn: 300698
2017-04-19 14:55:50 +00:00
Dehao Chen
2f4933a44d Using an address range map to speed up finding the inline stack for an address.
Summary:
In the current implementation, finding the inline stack for an address incurs an expensive linear search in 2 places:

* linear search for the top-level DIE
* recursive linear traversal of the DIE tree to find the path to the leaf DIE

In this patch, a map is built from each address to its corresponding leaf DIE. The inline stack is built by traversing from the leaf DIE up to the root DIE. This speeds up batch symbolization by ~10X without noticeable memory overhead.

Reviewers: dblaikie

Reviewed By: dblaikie

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D32177

llvm-svn: 300697
2017-04-19 14:50:57 +00:00