mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 04:22:57 +02:00
Commit Graph

2589 Commits

Author SHA1 Message Date
David Callahan
8669fd34a9 Modify df_iterator to support post-order actions
Summary: This makes a change to the state used to maintain visited information for the depth-first iterator. We now assume a method "completed(...)" which is called after all children of a node have been visited. In all existing cases this method does nothing, so this patch has no functional changes.  It will, however, allow a client to distinguish back edges from cross edges in a DFS tree.
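
To illustrate, here is a minimal hedged sketch of a client (FinishTrackingSet and classifyEdges are made-up names; it assumes the df_iterator_default_set helper and the depth_first_ext range from llvm/ADT/DepthFirstIterator.h):

    #include "llvm/ADT/DepthFirstIterator.h"
    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/IR/CFG.h"
    #include "llvm/IR/Function.h"

    // A visited set whose completed() hook records fully explored nodes.
    // A successor that is visited but not yet completed is still on the
    // DFS stack, so the edge to it is a back edge; visited-and-completed
    // means a cross (or forward) edge.
    struct FinishTrackingSet
        : llvm::df_iterator_default_set<llvm::BasicBlock *> {
      llvm::SmallPtrSet<llvm::BasicBlock *, 8> Done;
      void completed(llvm::BasicBlock *BB) { Done.insert(BB); }
      bool onStack(llvm::BasicBlock *BB) const {
        return this->count(BB) && !Done.count(BB);
      }
    };

    void classifyEdges(llvm::Function &F) {
      FinishTrackingSet Visited;
      for (llvm::BasicBlock *BB :
           llvm::depth_first_ext(&F.getEntryBlock(), Visited))
        for (llvm::BasicBlock *Succ : successors(BB))
          if (Visited.onStack(Succ))
            ; // BB -> Succ is a back edge
    }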

Reviewers: nadav, mehdi_amini, dberlin

Subscribers: MatzeB, mzolotukhin, twoh, freik, llvm-commits

Differential Revision: https://reviews.llvm.org/D25191

llvm-svn: 283391
2016-10-05 21:36:16 +00:00
Sanjay Patel
5e8e039088 fix documentation comments; NFC
llvm-svn: 283361
2016-10-05 18:51:12 +00:00
Andrey Bokhanko
196c2e59b2 Fix IntegerType::MAX_INT_BITS value
IntegerType::MAX_INT_BITS is apparently not in sync with Type::SubclassData
size. This patch fixes this.

Differential Revision: https://reviews.llvm.org/D24814

llvm-svn: 283215
2016-10-04 12:43:46 +00:00
Sanjoy Das
26178861cc [ConstantRange] Make getEquivalentICmp smarter
This change teaches getEquivalentICmp to be smarter about generating
ICMP_NE and ICMP_EQ predicates.

An earlier version of this change was landed as rL283057, which had a
use-after-free bug.  This new version fixes that bug and adds a (C++
unittests/) test case that would have triggered it in rL283057.
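
A hedged illustration of the newly handled pattern, using the public ConstantRange API (the exact cases covered are in the unit test):

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    #include "llvm/IR/InstrTypes.h"
    #include <cassert>

    void example() {
      // {42} inverted is the wrapped range [43, 42), i.e. every i8 except
      // 42, which getEquivalentICmp can now express as "icmp ne X, 42".
      llvm::ConstantRange NotFortyTwo =
          llvm::ConstantRange(llvm::APInt(8, 42)).inverse();
      llvm::CmpInst::Predicate Pred;
      llvm::APInt RHS;
      if (NotFortyTwo.getEquivalentICmp(Pred, RHS))
        assert(Pred == llvm::CmpInst::ICMP_NE && RHS == 42);
    }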

llvm-svn: 283078
2016-10-02 20:59:05 +00:00
Sanjoy Das
48cc33630a Remove duplicated code; NFC
ICmpInst::makeConstantRange does exactly the same thing as
ConstantRange::makeExactICmpRegion.

llvm-svn: 283059
2016-10-02 00:09:57 +00:00
Mehdi Amini
356d607cc1 Use StringRef in Datalayout API (NFC)
llvm-svn: 283013
2016-10-01 05:57:55 +00:00
Mehdi Amini
74130c0750 DIFlags: use StringRef instead of raw pointer (NFC)
llvm-svn: 283012
2016-10-01 05:57:50 +00:00
Mehdi Amini
b9d860c7ab Revert "Use StringRef in Datalayout API (NFC)"
This reverts commit r283009. Bots are broken.

llvm-svn: 283011
2016-10-01 05:12:48 +00:00
Mehdi Amini
72c57fa04f Use StringRef in Datalayout API (NFC)
llvm-svn: 283009
2016-10-01 04:17:59 +00:00
Mehdi Amini
1fef2dd6b7 Use StringRef in Pass/PassManager APIs (NFC)
llvm-svn: 283004
2016-10-01 02:56:57 +00:00
Piotr Padlewski
40ebe4972e NFC fix doxygen comments
llvm-svn: 282950
2016-09-30 21:05:49 +00:00
Adam Nemet
d6eee273b7 [LoopDataPrefetch] Port to new streaming API for opt remarks
llvm-svn: 282826
2016-09-30 00:42:43 +00:00
Adam Nemet
abdb6ed3d0 [LAA, LV] Port to new streaming interface for opt remarks. Update LV
(Recommit after making sure IsVerbose gets properly initialized in
DiagnosticInfoOptimizationBase.  See previous commit that takes care of
this.)

OptimizationRemarkAnalysis directly takes the role of the report that is
generated by LAA.

Then we need the magic to be able to turn an LAA remark into an LV
remark.  This is done via a new OptimizationRemark ctor.

llvm-svn: 282813
2016-09-30 00:01:30 +00:00
Adam Nemet
a367afa8b2 [Diag] Use non-static member initializer for IsVerbose. NFC
llvm-svn: 282812
2016-09-30 00:01:27 +00:00
Adam Nemet
3b79489357 Revert "[LAA, LV] Port to new streaming interface for opt remarks. Update LV"
This reverts commit r282758.

There are some clang failures I hadn't seen before.

llvm-svn: 282759
2016-09-29 20:17:37 +00:00
Adam Nemet
6a1c0c8759 [LAA, LV] Port to new streaming interface for opt remarks. Update LV
OptimizationRemarkAnalysis directly takes the role of the report that is
generated by LAA.

Then we need the magic to be able to turn an LAA remark into an LV
remark.  This is done via a new OptimizationRemark ctor.

llvm-svn: 282758
2016-09-29 20:12:18 +00:00
Adam Nemet
08f283bd6f [LV] Port OptimizationRemarkAnalysisFPCommute and
OptimizationRemarkAnalysisAliasing to new streaming API for opt remarks

llvm-svn: 282742
2016-09-29 18:04:47 +00:00
Adam Nemet
831398892e [LV] Convert emitRemark to new opt remark streaming interface
Also renamed the function to emitRemarkWithHints to better reflect what
the function actually does.

llvm-svn: 282723
2016-09-29 16:23:12 +00:00
Adam Nemet
7372bfd21b Remove unnecessary explicit
llvm-svn: 282722
2016-09-29 16:01:34 +00:00
Justin Bogner
f433e1181a IR: Rename the tablegen'd Attributes file to .gen
All of the other tablegen'd include files are named .gen, so it's best
to be consistent.

llvm-svn: 282680
2016-09-29 03:35:19 +00:00
Eric Christopher
8e278c36c0 Remove the default constructor and count variable from the Mangler since
we can just use the size of the DenseMap as a unique counter.
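
The idiom, as a minimal hedged sketch (the map name is assumed, not necessarily the exact Mangler code):

    #include "llvm/ADT/DenseMap.h"
    #include "llvm/IR/GlobalValue.h"

    unsigned getAnonID(
        llvm::DenseMap<const llvm::GlobalValue *, unsigned> &AnonGlobalIDs,
        const llvm::GlobalValue *GV) {
      // operator[] default-constructs the ID to 0; right after the first
      // insertion, size() includes the new entry, so it serves as a unique
      // nonzero counter without any separate count variable.
      unsigned &ID = AnonGlobalIDs[GV];
      if (ID == 0)
        ID = AnonGlobalIDs.size();
      return ID;
    }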

llvm-svn: 282674
2016-09-29 02:03:50 +00:00
Artem Belevich
ed0bd7024b [NVPTX] Added intrinsics for atom.gen.{sys|cta}.* instructions.
These are only available on sm_60+ GPUs.

Differential Revision: https://reviews.llvm.org/D24943

llvm-svn: 282607
2016-09-28 17:25:38 +00:00
Adam Nemet
ec2292c80c [Inliner] Port all opt remarks to new streaming API
llvm-svn: 282559
2016-09-27 23:47:03 +00:00
Adam Nemet
c03a73efe2 Shorten DiagnosticInfoOptimizationRemark* to OptimizationRemark*. NFC
With the new streaming interface, these class names need to be typed a
lot, and they are way too long.

llvm-svn: 282544
2016-09-27 22:19:23 +00:00
Adam Nemet
f602aa8cdd Output optimization remarks in YAML
(Re-committed after moving the template specialization under the yaml
namespace.  GCC was complaining about this.)

This allows various presentations of this data using external tools.
This was first recommended here[1].

As an example, consider this module:

  1 int foo();
  2 int bar();
  3
  4 int baz() {
  5   return foo() + bar();
  6 }

The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):

  remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
  remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)

Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:

  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 10 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: foo
    - String:  will not be inlined into
    - Caller: baz
  ...
  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 18 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: bar
    - String:  will not be inlined into
    - Caller: baz
  ...

This is a summary of the high-level decisions:

* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:

   ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
                DEBUG_TYPE, "NotInlined", &I)
            << NV("Callee", Callee) << " will not be inlined into "
            << NV("Caller", CS.getCaller()) << setIsVerbose());

NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.

Subsequent patches will update ORE users to use the new streaming API.

* I am using YAML I/O for writing the YAML file.  YAML I/O requires you
to specify reading and writing at once but reading is highly non-trivial
for some of the more complex LLVM types.  Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports and
asserts that we're writing only.

On the other hand, I did verify that the class hierarchy starting at
DiagnosticInfoOptimizationBase can be mapped back from the YAML generated
here (see D24479).

* The YAML stream is stored in the LLVM context.

* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".

* As before, hotness is computed in the analysis pass instead of
DiagnosticInfo.  This avoids a layering problem, since BFI is in
Analysis while DiagnosticInfo is in IR.

[1] https://reviews.llvm.org/D19678#419445

Differential Revision: https://reviews.llvm.org/D24587

llvm-svn: 282539
2016-09-27 20:55:07 +00:00
Adam Nemet
5058aadaf2 Revert "Output optimization remarks in YAML"
This reverts commit r282499.

The GCC bots are failing.

llvm-svn: 282503
2016-09-27 16:39:24 +00:00
Adam Nemet
b1d6f940c4 Output optimization remarks in YAML
This allows various presentations of this data using external tools.
This was first recommended here[1].

As an example, consider this module:

  1 int foo();
  2 int bar();
  3
  4 int baz() {
  5   return foo() + bar();
  6 }

The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):

  remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
  remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)

Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:

  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 10 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: foo
    - String:  will not be inlined into
    - Caller: baz
  ...
  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 18 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: bar
    - String:  will not be inlined into
    - Caller: baz
  ...

This is a summary of the high-level decisions:

* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:

   ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
                DEBUG_TYPE, "NotInlined", &I)
            << NV("Callee", Callee) << " will not be inlined into "
            << NV("Caller", CS.getCaller()) << setIsVerbose());

NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.

Subsequent patches will update ORE users to use the new streaming API.

* I am using YAML I/O for writing the YAML file.  YAML I/O requires you
to specify reading and writing at once but reading is highly non-trivial
for some of the more complex LLVM types.  Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports and
asserts that we're writing only.

On the other hand, I did verify that the class hierarchy starting at
DiagnosticInfoOptimizationBase can be mapped back from the YAML generated
here (see D24479).

* The YAML stream is stored in the LLVM context.

* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".

* As before, hotness is computed in the analysis pass instead of
DiagnosticInfo.  This avoids a layering problem, since BFI is in
Analysis while DiagnosticInfo is in IR.

[1] https://reviews.llvm.org/D19678#419445

Differential Revision: https://reviews.llvm.org/D24587

llvm-svn: 282499
2016-09-27 16:15:16 +00:00
Adam Nemet
e7e8891f55 Sort headers
llvm-svn: 282498
2016-09-27 16:15:11 +00:00
Nemanja Ivanovic
bf003f8d13 [Power9] Builtins for ELF v.2 API conformance - back end portion
This patch corresponds to review:
https://reviews.llvm.org/D24396

This patch adds support for the "vector count trailing zeroes",
"vector compare not equal", and "vector compare not equal or zero"
instructions, as well as the "scalar count trailing zeroes" instructions.
It also changes vector negation to use XXLNOR (when VSX is enabled) so as
not to increase register pressure (previously this was done with a splat
immediate of all ones followed by an XXLXOR). This was done because the
altivec.h builtins (patch to follow) use vector negation, and the use of
an additional register for the splat immediate is not optimal.

llvm-svn: 282478
2016-09-27 08:42:12 +00:00
Piotr Padlewski
3152e057e1 [thinlto] Basic thinlto fdo heuristic
Summary:
This patch improves the ThinLTO importer
by importing functions up to 3x larger when they are called from a hot block.

I compared performance with trunk on SPEC, and there
were gains of about 2% on povray and 3.33% on milc. These results seem
to be consistent and match the results Teresa got with her simple
heuristic. Some benchmarks got slower, but I think they are just
noisy (mcf, xalancbmk, omnetpp); I am running the benchmarks again with
more iterations to confirm. The geomean of all benchmarks, including the
noisy ones, was about +0.02%.

I see a much better improvement on the Google branch with Easwaran's patch
for PGO call-site inlining (the inliner actually inlines those big
functions). Overall I see a +0.5% improvement, and I get +8.65% on povray.
So I guess we will see a much bigger change when Easwaran's patch lands
(it depends on the new pass manager), but it is still worth putting this
in trunk before then.

Implementation changes:
- Removed CallsiteCount.
- ProfileCount was replaced by Hotness.
- hot-import-multiplier is set to 3.0 for now; I didn't have time to tune
it, but I see that we get most of the interesting functions with 3, so
there is not much performance difference with higher values, and binary
size doesn't grow as much as with 10.0.

Reviewers: eraman, mehdi_amini, tejohnson

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D24638

llvm-svn: 282437
2016-09-26 20:37:32 +00:00
Cameron McInally
f21a9082e2 [AVX512] Fix return types on int_x86_avx512_gatherXXX_di intrinsics
The return type should match the pass-through vector type.

Differential Revision: https://reviews.llvm.org/D24744

llvm-svn: 282081
2016-09-21 16:06:10 +00:00
Xinliang David Li
7c7f35dd63 [Profile] code refactoring: make getStep a method in base class
llvm-svn: 282002
2016-09-20 19:07:22 +00:00
Xinliang David Li
3da3a5fee4 [Profile] Implement select instruction instrumentation in IR PGO
Differential Revision: http://reviews.llvm.org/D23727

llvm-svn: 281858
2016-09-18 18:34:07 +00:00
Mehdi Amini
3efb00834f Don't create a SymbolTable in Function when the LLVMContext discards value names (NFC)
The ValueSymbolTable is used to detect name conflicts and rename
instructions automatically. This is not needed when value names are
automatically discarded by the LLVMContext.
No functional change intended, just saving a little bit of memory.

This is a recommit of r281806 after fixing the accessor to return
a pointer instead of a reference and updating all the call-sites.
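
A hedged usage sketch of the resulting behavior (example() is a made-up driver):

    #include "llvm/IR/Function.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/ValueSymbolTable.h"

    void example(llvm::LLVMContext &Ctx, llvm::Function *F) {
      Ctx.setDiscardValueNames(true);  // set before creating functions
      // With names discarded, no per-function ValueSymbolTable is
      // allocated, so the (now pointer-returning) accessor may be null:
      if (llvm::ValueSymbolTable *ST = F->getValueSymbolTable())
        (void)ST;  // only non-null when value names are kept
    }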

llvm-svn: 281813
2016-09-17 06:00:02 +00:00
Mehdi Amini
5d436ed7a6 Revert "Don't create a SymbolTable in Function when the LLVMContext discards value names (NFC)"
This reverts commit r281806. It introduces undefined behavior as an
API is returning a reference to the Symtab

llvm-svn: 281808
2016-09-17 04:36:46 +00:00
Mehdi Amini
58f35cd01a Don't create a SymbolTable in Function when the LLVMContext discards value names (NFC)
The ValueSymbolTable is used to detect name conflicts and rename
instructions automatically. This is not needed when value names are
automatically discarded by the LLVMContext.
No functional change intended, just saving a little bit of memory.

llvm-svn: 281806
2016-09-17 03:39:01 +00:00
Dehao Chen
a2d8fe8a1b Change extractProfMetadata and extractProfTotalWeight to const member function.
llvm-svn: 281760
2016-09-16 18:27:20 +00:00
Mehdi Amini
62ba5df396 Fix auto-upgrade of TBAA tags in Bitcode Reader
If TBAA is on an intrinsic and it gets upgraded, it'll delete the call
instruction that we collected in a vector. Even if we were to use
WeakVH, it'll drop the TBAA and we'll hit the assert on the upgrade
path.

r263673 gave a shot to make sure the TBAA upgrade happens before
intrinsics upgrade, but failed to account for all cases.

Instead of collecting instructions in a vector, this patch just
upgrades the TBAA on the fly, because metadata is already fully
loaded at this point.

Differential Revision: https://reviews.llvm.org/D24533

llvm-svn: 281549
2016-09-14 22:29:59 +00:00
Craig Topper
33e6517c79 [X86] Remove the VCVTSI2SD32 with rounding intrinsic. It's not used by clang and not needed since 32-bit integer to double is always exact.
llvm-svn: 281442
2016-09-14 06:27:46 +00:00
Craig Topper
8fcf357de7 [X86] Remove masked shufpd/shufps intrinsics and autoupgrade to native vector shuffles. They were removed from clang previously but accidentally left in the backend.
llvm-svn: 281300
2016-09-13 07:40:53 +00:00
Craig Topper
9937a98645 [X86] Remove some dead intrinsics. They aren't implemented and clang doesn't reference them.
llvm-svn: 281299
2016-09-13 07:40:48 +00:00
Peter Collingbourne
a9fbb2813b DebugInfo: New metadata representation for global variables.
This patch reverses the edge from DIGlobalVariable to GlobalVariable.
This will allow us to more easily preserve debug info metadata when
manipulating global variables.

Fixes PR30362. A program for upgrading test cases is attached to that
bug.

Differential Revision: http://reviews.llvm.org/D20147

llvm-svn: 281284
2016-09-13 01:12:59 +00:00
Adam Nemet
7881c692f8 [OptDiag] Add getHotness accessor
llvm-svn: 281274
2016-09-12 23:46:34 +00:00
Duncan P. N. Exon Smith
9c0e21dde7 ADT: Remove ilist_iterator::reset(), NFC
ilist_iterator::reset was unnecessary API, and wasn't any clearer (or
safer) at the call site than constructing a temporary and assigning it
to the iterator.

llvm-svn: 281175
2016-09-11 20:47:27 +00:00
Arnold Schwaighofer
8781da1527 Add an isSwiftError predicate to Value
llvm-svn: 281143
2016-09-10 18:14:54 +00:00
Wei Ding
876390a884 AMDGPU : Fix mqsad_u32_u8 instruction's incorrect data type.
Differential Revision: http://reviews.llvm.org/D23700

llvm-svn: 281081
2016-09-09 19:31:51 +00:00
Amaury Sechet
f1273caa0c Rationalise the attribute getter/setter methods on Function and CallSite.
Summary:
While working on mapping attributes in the C API, it became clear that the recent changes to the C++ API left Function and Call/Invoke with an attribute API that had grown in an ad hoc manner. This makes it difficult to work with, because one doesn't know which overloads exist and which do not.

Make sure that getter/setter functions exist for both the enum and string versions. Remove inconsistent getters/setters, unless they have many call sites.

This should make it easier to work with attributes in the future.

This doesn't change how attributes work.

Reviewers: bkramer, whitequark, mehdi_amini, void

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D21514

llvm-svn: 281019
2016-09-09 04:50:38 +00:00
Peter Collingbourne
8ad73b74e3 IR: Remove Value::intersectOptionalDataWith, replace all calls with calls to Instruction::andIRFlags.
The two functions are functionally equivalent.

Differential Revision: https://reviews.llvm.org/D22830

llvm-svn: 280884
2016-09-07 23:39:04 +00:00
Victor Leschuk
7c3307dca6 Fix comment formatting for DebugInfoFlags.def
llvm-svn: 280722
2016-09-06 17:22:48 +00:00
Leny Kholodov
f831870521 Formatting with clang-format patch r280700
llvm-svn: 280716
2016-09-06 17:03:02 +00:00
Leny Kholodov
94af1f2591 DebugInfo: use strongly typed enum for debug info flags
Use ADT/BitmaskEnum for DINode::DIFlags for the following purposes:

* Get rid of unsigned int for flags to avoid problems on platforms with sizeof(int) < 4
* Flags are now strongly typed

Patch by: Victor Leschuk <vleschuk@gmail.com>
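
The BitmaskEnum idiom in a minimal hedged sketch (the enum and its values are illustrative, not the real DIFlags):

    #include "llvm/ADT/BitmaskEnum.h"
    #include <cstdint>

    namespace demo {
    LLVM_ENABLE_BITMASK_ENUMS_IN_NAMESPACE();

    enum Flags : uint32_t {
      None       = 0,
      Private    = 1u << 0,
      Artificial = 1u << 1,
      LLVM_MARK_AS_BITMASK_ENUM(/* LargestValue = */ Artificial)
    };
    } // namespace demo

    // Bitwise operators now exist and stay strongly typed (the underlying
    // type is pinned to uint32_t, so sizeof(int) no longer matters):
    demo::Flags F = demo::Private | demo::Artificial;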

Differential Revision: https://reviews.llvm.org/D23766

llvm-svn: 280700
2016-09-06 10:46:28 +00:00
Mehdi Amini
0fd43d38ae Revert "DebugInfo: use strongly typed enum for debug info flags"
This reverts commit r280686; bots are broken.

llvm-svn: 280688
2016-09-06 03:26:37 +00:00
Mehdi Amini
c5f4f325a8 DebugInfo: use strongly typed enum for debug info flags
Use ADT/BitmaskEnum for DINode::DIFlags for the following purposes:
    * Get rid of unsigned int for flags to avoid problems on platforms with sizeof(int) < 4
    * Flags are now strongly typed

Patch by: Victor Leschuk <vleschuk@gmail.com>

Differential Revision: https://reviews.llvm.org/D23766

llvm-svn: 280686
2016-09-06 03:14:06 +00:00
Craig Topper
e6bea68a97 [AVX-512] Remove 128-bit and 256-bit masked floating point add/sub/mul/div intrinsics and upgrade to native IR.
llvm-svn: 280633
2016-09-04 18:13:33 +00:00
Craig Topper
7bf68ae691 [AVX-512] Remove masked integer add/sub/mull intrinsics and upgrade to native IR.
llvm-svn: 280611
2016-09-04 02:09:53 +00:00
Xinliang David Li
4abbd8e898 [Profile] preserve branch metadata lowering select in CGP
CGP currently drops a select's MD_prof profile data when generating
the conditional branch, which can lead to bad code layout. The patch
fixes the issue.
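
Conceptually (a hedged sketch of the idea, not the literal patch), the select's weights are extracted and re-attached to the new branch:

    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/MDBuilder.h"

    void copySelectWeights(llvm::SelectInst *SI, llvm::BranchInst *BI) {
      uint64_t TrueWeight, FalseWeight;
      // Carry the select's !prof branch weights over to the new branch.
      if (SI->extractProfMetadata(TrueWeight, FalseWeight)) {
        llvm::MDBuilder MDB(SI->getContext());
        BI->setMetadata(llvm::LLVMContext::MD_prof,
                        MDB.createBranchWeights((uint32_t)TrueWeight,
                                                (uint32_t)FalseWeight));
      }
    }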

Differential Revision: http://reviews.llvm.org/D24169

llvm-svn: 280600
2016-09-03 21:26:36 +00:00
Mehdi Amini
0debeb1eea Fix ThinLTO crash with debug info
Because of the recent change to ODR type uniquing in the context,
we can reach types defined in another module during IR linking.
This triggered some assertions when IR linking without starting
from an empty module. To alleviate that, we self-map metadata
defined in the destination module so that they won't be visited.

Differential Revision: https://reviews.llvm.org/D23841

llvm-svn: 280599
2016-09-03 21:12:33 +00:00
Duncan P. N. Exon Smith
529ecc85ce ADT: Split out iplist_impl from iplist, NFC
Split out iplist_impl from iplist, and change SymbolTableList to inherit
directly from iplist_impl.  This makes it more straightforward to add
new template parameters to iplist [*]:
- iplist_impl takes a "base" list that provides the intrusive
  functionality (usually simple_ilist<T>) and a traits class.
- iplist no longer takes a "Traits" template parameter.  It only takes
  the value_type, T, and instantiates iplist_impl with simple_ilist<T>
  and ilist_traits<T>.
- SymbolTableList now inherits from iplist_impl, instead of iplist.

Note for out-of-tree code: if you have an iplist whose second template
parameter was *not* the default (i.e., not ilist_traits<YourT>), you
have three options:
- Stop using a custom traits class, and instead specialize
  ilist_traits<YourT>.  This is the usual thing to do.
- Specialize iplist<YourT> to pass your custom traits class into
  iplist_impl.
- Create your own trivial list type that passes your custom traits class
  into iplist_impl (see SymbolTableList<> for an example).

[*]: The eventual goal is to start tracking a sentinel bit on the
MachineInstr list even when LLVM_ENABLE_ABI_BREAKING_CHECKS is off,
which will enable MachineBasicBlock::reverse_iterator to have normal
list invalidation semantics matching the new
iplist<>::reverse_iterator from r280032.

llvm-svn: 280569
2016-09-03 02:07:45 +00:00
Duncan P. N. Exon Smith
57584a7379 ADT: Remove external uses of ilist_iterator, NFC
Delete the dead code for Write(ilist_iterator) in the IR Verifier,
inline report(ilist_iterator) at its call sites in the MachineVerifier,
and use simple_ilist<>::iterator in SymbolTableListTraits.

The only remaining reference to ilist_iterator outside of the ilist
implementation is from MachineInstrBundleIterator.  I'll get rid of that
in a follow-up.

llvm-svn: 280565
2016-09-03 01:22:56 +00:00
Sanjay Patel
f50f177563 fix documentation comments; NFC
llvm-svn: 280489
2016-09-02 15:43:25 +00:00
Craig Topper
a8fc395658 [AVX-512] Remove floating point logical operation instrinsics and replace them with native IR.
llvm-svn: 280466
2016-09-02 05:29:17 +00:00
Quentin Colombet
ad7055f8ba [DiagnosticInfo] Add a diagnostic class for the fallback of ISel.
This will be used to warn when we fall back in GlobalISel.

llvm-svn: 280271
2016-08-31 18:42:55 +00:00
Tim Shen
830cc5f6f2 s/static inline/static/ for headers I have changed in r279475. NFC.
llvm-svn: 280257
2016-08-31 16:48:13 +00:00
Philip Reames
686905fa4d [statepoints][experimental] Add support for live-in semantics of values in deopt bundles
This is a first step towards supporting deopt value lowering and reporting entirely with the register allocator. I hope to build on this in the near future to support live-on-return semantics, but I have a use case which allows me to test and investigate code quality with just the live-in semantics, so I've chosen to start there. For those curious, my use case is our implementation of the "__llvm_deoptimize" function we bind to @llvm.deoptimize. I'm choosing not to hard-code that fact in the patch and instead make it configurable via function attributes.

The basic approach here is modelled on what is done for the "Live In" values on stackmaps and patchpoints. (A secondary goal here is to remove one of the last barriers to merging the pseudo instructions.) We start by adding the operands directly to the STATEPOINT SDNode. Once we've lowered to MI, we extend the remat logic used by the register allocator to fold virtual register uses into StackMap::Indirect entries as needed. This does rely on the fact that the register allocator rematerializes. If it didn't along some code path, we could end up with more vregs than physical registers and fail to allocate.

Today, we *only* fold in the register allocator. This can create some weird effects when combined with arguments passed on the stack because we don't fold them appropriately. I have an idea how to fix that, but it needs this patch in place to work on that effectively. (There's some weird interaction with the scheduler as well, more investigation needed.)

My near term plan is to land this patch off-by-default, experiment in my local tree to identify any correctness issues and then start fixing codegen problems one by one as I find them. Once I have the live-in lowering fully working (both correctness and code quality), I'm hoping to move on to the live-on-return semantics. Note: I don't have any *known* miscompiles with this patch enabled, but I'm pretty sure I'll find at least a couple. Thus, the "experimental" tag and the fact it's off by default.

Differential Revision: https://reviews.llvm.org/D24000

llvm-svn: 280250
2016-08-31 15:12:17 +00:00
Chad Rosier
8f1b069268 Fix comments about IndirectBrInst in Instructions.h
Patch by yo (Chiang, Yi-Yo) <yo@skymizer.com>.
Differential Revision: https://reviews.llvm.org/D23982

llvm-svn: 280241
2016-08-31 13:39:34 +00:00
Daniel Berlin
f56c60c3dd IntrArgMemOnly is only defined for (and the current AA machinery only sanely supports) pointer arguments, but these intrinsics have vector-of-pointer arguments. Remove ArgMemOnly until we either have the machinery, define a new attribute, or something similar.
llvm-svn: 280143
2016-08-30 19:58:48 +00:00
Duncan P. N. Exon Smith
9df0e64f25 ADT: Split ilist_node_traits into alloc and callback, NFC
Many lists want to override only allocation semantics, or callbacks for
iplist.  Split these up to prevent code duplication.
- Specialize ilist_alloc_traits to change the implementations of
  deleteNode() and createNode().
- One common desire is to do nothing in deleteNode() and disable
  createNode().  Specialize ilist_alloc_traits to inherit from
  ilist_noalloc_traits for that behaviour.
- Specialize ilist_callback_traits to use the addNodeToList(),
  removeNodeFromList(), and transferNodesFromList() callbacks.

As a drive-by, add some coverage to the callback-related unit tests.
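
For example, opting out of allocation now takes a single specialization (a hedged sketch; MyNode is hypothetical):

    #include "llvm/ADT/ilist.h"

    struct MyNode;  // hypothetical node type, owned elsewhere

    namespace llvm {
    // deleteNode() becomes a no-op and createNode() is disabled.
    template <>
    struct ilist_alloc_traits<MyNode> : ilist_noalloc_traits<MyNode> {};
    } // namespace llvm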

llvm-svn: 280128
2016-08-30 18:40:47 +00:00
James Y Knight
67397c2b8d Replace incorrect "#ifdef DEBUG" with "#ifndef NDEBUG".
The former is simply wrong -- the code will either never be used or will
always be used, rather than being dependent upon whether it's built with
debug assertions enabled.

The macro DEBUG isn't ever set by the llvm build system. But, the macro
DEBUG(X) is defined (unconditionally) if you happen to include
llvm/Support/Debug.h.

The code in Value.h which was erroneously protected by the #ifdef DEBUG
didn't even compile -- you can't cast<> from an LLVMOpaqueValue
directly. Fortunately, it was never invoked, as Core.cpp included
Value.h before Debug.h.

The conditionalized code in AArch64CollectLOH.cpp was previously always
used, as it includes Debug.h.

llvm-svn: 280056
2016-08-30 03:16:16 +00:00
Duncan P. N. Exon Smith
4e09f9bf86 ADT: Give ilist<T>::reverse_iterator a handle to the current node
Reverse iterators to doubly-linked lists can be simpler (and cheaper)
than std::reverse_iterator.  Make it so.

In particular, change ilist<T>::reverse_iterator so that it is *never*
invalidated unless the node it references is deleted.  This matches the
guarantees of ilist<T>::iterator.

(Note: MachineBasicBlock::iterator is *not* an ilist iterator, but a
MachineInstrBundleIterator<MachineInstr>.  This commit does not change
MachineBasicBlock::reverse_iterator, but it does update
MachineBasicBlock::reverse_instr_iterator.  See note at end of commit
message for details on bundle iterators.)

Given the list (with the Sentinel showing twice for simplicity):

     [Sentinel] <-> A <-> B <-> [Sentinel]

the following is now true:
 1. begin() represents A.
 2. begin() holds the pointer for A.
 3. end() represents [Sentinel].
 4. end() holds the pointer for [Sentinel].
 5. rbegin() represents B.
 6. rbegin() holds the pointer for B.
 7. rend() represents [Sentinel].
 8. rend() holds the pointer for [Sentinel].

The changes are #6 and #8.  Here are some properties from the old
scheme (which used std::reverse_iterator):
- rbegin() held the pointer for [Sentinel] and rend() held the pointer
  for A;
- operator*() cost two dereferences instead of one;
- converting from a valid iterator to its valid reverse_iterator
  involved a confusing increment; and
- "RI++->erase()" left RI invalid.  The unintuitive replacement was
  "RI->erase(), RE = end()".

With vector-like data structures these properties are hard to avoid
(since past-the-beginning is not a valid pointer), and don't impose a
real cost (since there's still only one dereference, and all iterators
are invalidated on erase).  But with lists, this was a poor design.

Specifically, the following code (which obviously works with normal
iterators) now works with ilist::reverse_iterator as well:

    for (auto RI = L.rbegin(), RE = L.rend(); RI != RE;)
      fooThatMightRemoveArgFromList(*RI++);

Converting between iterator and reverse_iterator for the same node uses
the getReverse() function.

    reverse_iterator iterator::getReverse();
    iterator reverse_iterator::getReverse();

Why doesn't iterator <=> reverse_iterator conversion use constructors?

In order to catch and update old code, reverse_iterator does not even
have an explicit conversion from iterator.  It wouldn't be safe because
there would be no reasonable way to catch all the bugs from the changed
semantic (see the changes at call sites that are part of this patch).

Old code used this API:

    std::reverse_iterator::reverse_iterator(iterator);
    iterator std::reverse_iterator::base();

Here's how to update from old code to new (that incorporates the
semantic change), assuming I is an ilist<>::iterator and RI is an
ilist<>::reverse_iterator:

            [Old]         ==>          [New]
    reverse_iterator(I)       (--I).getReverse()
    reverse_iterator(I)         ++I.getReverse()
  --reverse_iterator(I)           I.getReverse()
    reverse_iterator(++I)         I.getReverse()
          RI.base()          (--RI).getReverse()
          RI.base()            ++RI.getReverse()
        --RI.base()              RI.getReverse()
      (++RI).base()              RI.getReverse()
  delete &*RI, RE = end()         delete &*RI++
  RI->erase(), RE = end()         RI++->erase()

=======================================
Note: bundle iterators are out of scope
=======================================

MachineBasicBlock::iterator, also known as
MachineInstrBundleIterator<MachineInstr>, is a wrapper to represent
MachineInstr bundles.  The idea is that each operator++ takes you to the
beginning of the next bundle.  Implementing a sane reverse iterator for
this is harder than ilist.  Here are the options:
- Use std::reverse_iterator<MBB::i>.  Store a handle to the beginning of
  the next bundle.  A call to operator*() runs a loop (usually
  operator--() will be called 1 time, for unbundled instructions).
  Increment/decrement just works.  This is the status quo.
- Store a handle to the final node in the bundle.  A call to operator*()
  still runs a loop, but it iterates one time fewer (usually
  operator--() will be called 0 times, for unbundled instructions).
  Increment/decrement just works.
- Make the ilist_sentinel<MachineInstr> *always* store that it's the
  sentinel (instead of just in asserts mode).  Then the bundle iterator
  can sniff the sentinel bit in operator++().

I initially tried implementing the end() option as part of this commit,
but updating iterator/reverse_iterator conversion call sites was
error-prone.  I have a WIP series of patches that implements the final
option.

llvm-svn: 280032
2016-08-30 00:13:12 +00:00
Gor Nishanov
cc3ff9c81d [Coroutines] Part 9: Add cleanup subfunction.
Summary:
[Coroutines] Part 9: Add cleanup subfunction.

This patch completes coroutine heap allocation elision. Now, the heap elision example from docs/Coroutines.rst compiles and produces the expected result (see test/Transforms/Coroutines/ex3.ll).

Intrinsic Changes:
* coro.free gets a token parameter tying it to coro.id to allow reliably discovering all coro.frees associated with a particular coroutine.
* coro.id gets an extra parameter that points back to a coroutine function. This allows checking whether a coro.id describes the enclosing function or belongs to a different function that was later inlined.

CoroSplit now creates three subfunctions:
1. f$resume - resume logic
2. f$destroy - cleanup logic, followed by deallocation code
3. f$cleanup - just the cleanup code

The CoroElide pass, during devirtualization, replaces coro.destroy with either f$destroy or f$cleanup, depending on whether heap elision is performed.

Other fixes, improvements:
* Fixed a buglet in Shape::buildFrame that was not creating coro.save properly if a coroutine has more than one suspend point.

* Switched to using a variable-width suspend index field (it is no longer limited to 32 bits; the index field can be as small as i1 or as large as i<whatever-size_t-is>).

Reviewers: majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23844

llvm-svn: 279971
2016-08-29 14:34:12 +00:00
Changpeng Fang
351c66ab91 AMDGCN/SI: Implement readlane/readfirstlane intrinsics
Summary:
  This patch implements readlane/readfirstlane intrinsics.
TODO: we need to define a new register class to handle the case where
the source could be a vector register or M0.

Reviewed by:
  arsenm and tstellarAMD

Differential Revision:
  http://reviews.llvm.org/D22489

llvm-svn: 279660
2016-08-24 20:35:23 +00:00
David Blaikie
22b8e86371 DebugInfo: Add flag to CU to disable emission of inline debug info into the skeleton CU
In cases where .dwo/.dwp files are guaranteed to be available, skipping
the extra online (in the .o file) inline info can save a substantial
amount of space - see the original r221306 for more details there.

llvm-svn: 279650
2016-08-24 18:29:49 +00:00
Chandler Carruth
94942f5c00 [PM] Introduce basic update capabilities to the new PM's CGSCC pass
manager, including both plumbing and logic to handle function pass
updates.

There are three fundamentally tied changes here:
1) Plumbing *some* mechanism for updating the CGSCC pass manager as the
   CG changes while passes are running.
2) Changing the CGSCC pass manager infrastructure to have support for
   the underlying graph to mutate mid-pass run.
3) Actually updating the CG after function passes run.

I can separate them if necessary, but I think it's really useful to have
them together as the needs of #3 drove #2, and that in turn drove #1.

The plumbing technique is to extend the "run" method signature with
extra arguments. We provide the call graph that intrinsically is
available as it is the basis of the pass manager's IR units, and an
output parameter that records the results of updating the call graph
during an SCC pass's run. Note that "...UpdateResult" isn't a *great*
name here... suggestions very welcome.
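
Concretely, an SCC pass's run method now has roughly this shape (a hedged sketch; MySCCPass is hypothetical and exact type spellings are per the patch):

    struct MySCCPass : llvm::PassInfoMixin<MySCCPass> {
      llvm::PreservedAnalyses run(llvm::LazyCallGraph::SCC &C,
                                  llvm::CGSCCAnalysisManager &AM,
                                  llvm::LazyCallGraph &CG,
                                  llvm::CGSCCUpdateResult &UR);
    };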

I tried a pretty frustrating number of different data structures and such
for the innards of the update result. Every other one failed for one
reason or another. Sometimes I just couldn't keep the layers of
complexity right in my head. The thing that really worked was to just
directly provide access to the underlying structures used to walk the
call graph so that their updates could be informed by the *particular*
nature of the change to the graph.

The technique for how to make the pass management infrastructure cope
with mutating graphs was also something that took a really, really large
number of iterations to get to a place where I was happy. Here are some
of the considerations that drove the design:

- We operate at three levels within the infrastructure: RefSCC, SCC, and
  Node. In each case, we are working bottom up and so we want to
  continue to iterate on the "lowest" node as the graph changes. Look at
  how we iterate over nodes in an SCC running function passes as those
  function passes mutate the CG. We continue to iterate on the "lowest"
  SCC, which is the one that continues to contain the function just
  processed.

- The call graph structure re-uses SCCs (and RefSCCs) during mutation
  events for the *highest* entry in the resulting new subgraph, not the
  lowest. This means that it is necessary to continually update the
  current SCC or RefSCC as it shifts. This is really surprising and
  subtle, and took a long time for me to work out. I actually tried
  changing the call graph to provide the opposite behavior, and it
  breaks *EVERYTHING*. The graph update algorithms are really deeply
  tied to this particular pattern.

- When SCCs or RefSCCs are split apart and refined and we continually
  re-pin our processing to the bottom one in the subgraph, we need to
  enqueue the newly formed SCCs and RefSCCs for subsequent processing.
  Queuing them presents a few challenges:
  1) SCCs and RefSCCs use wildly different iteration strategies at
     a high level. We end up needing to converge them on worklist
     approaches that can be extended in order to be able to handle the
     mutations.
  2) The order of the enqueuing needs to remain bottom-up post-order so
     that we don't get surprising order of visitation for things like
     the inliner.
  3) We need the worklists to have set semantics so we don't duplicate
     things endlessly. We don't need a *persistent* set though because
     we always keep processing the bottom node!!!! This is super, super
     surprising to me and took a long time to convince myself this is
     correct, but I'm pretty sure it is... Once we sink down to the
     bottom node, we can't re-split out the same node in any way, and
     the postorder of the current queue is fixed and unchanging.
  4) We need to make sure that the "current" SCC or RefSCC actually gets
     enqueued here such that we re-visit it because we continue
     processing a *new*, *bottom* SCC/RefSCC.

- We also need the ability to *skip* SCCs and RefSCCs that get merged
  into a larger component. We even need the ability to skip *nodes* from
  an SCC that are no longer part of that SCC.

This led to the design you see in the patch which uses SetVector-based
worklists. The RefSCC worklist is always empty until an update occurs
and is just used to handle those RefSCCs created by updates as the
others don't even exist yet and are formed on-demand during the
bottom-up walk. The SCC worklist is pre-populated from the RefSCC, and
we push new SCCs onto it and blacklist existing SCCs on it to get the
desired processing.

We then *directly* update these when updating the call graph as I was
never able to find a satisfactory abstraction around the update
strategy.

Finally, we need to compute the updates for function passes. This is
mostly used as an initial customer of all the update mechanisms to drive
their design to at least cover some real set of use cases. There are
a bunch of interesting things that came out of doing this:

- It is really nice to do this a function at a time because that
  function is likely hot in the cache. This means we want even the
  function pass adaptor to support online updates to the call graph!

- To update the call graph after arbitrary function pass mutations is
  quite hard. We have to build a fairly comprehensive set of
  data structures and then process them. Fortunately, some of this code
  is related to the code for building the call graph in the first place.
  Unfortunately, very little of it makes any sense to share because the
  nature of what we're doing is so very different. I've factored out the
  one part that made sense at least.

- We need to transfer these updates into the various structures for the
  CGSCC pass manager. Once those were more sanely worked out, this
  became relatively easier. But some of those needs necessitated changes
  to the LazyCallGraph interface to make it significantly easier to
  extract the changed SCCs from an update operation.

- We also need to update the CGSCC analysis manager as the shape of the
  graph changes. When an SCC is merged away we need to clear analyses
  associated with it from the analysis manager which we didn't have
  support for in the analysis manager infrastructure. New SCCs are easy!
  But then we have the case that the original SCC has its shape changed
  but remains in the call graph. There we need to *invalidate* the
  analyses associated with it.

- We also need to invalidate analyses after we *finish* processing an
  SCC. But the analyses we need to invalidate here are *only those for
  the newly updated SCC*!!! Because we only continue processing the
  bottom SCC, if we split SCCs apart the original one gets invalidated
  once when its shape changes and is not processed further, so its
  analyses will be correct. It is the bottom SCC which continues being
  processed and needs to have the "normal" invalidation done based on
  the preserved analyses set.

All of this is mostly background and context for the changes here.

Many thanks to all the reviewers who helped here. Especially Sanjoy who
caught several interesting bugs in the graph algorithms, David, Sean,
and others who all helped with feedback.

Differential Revision: http://reviews.llvm.org/D21464

llvm-svn: 279618
2016-08-24 09:37:14 +00:00
Xinliang David Li
af8f6ba1d3 Fix windows build failure
llvm-svn: 279525
2016-08-23 16:00:54 +00:00
Xinliang David Li
df1f2a779a [Profile] refactor meta data copying/swapping code
Differential Revision: http://reviews.llvm.org/D23619

llvm-svn: 279523
2016-08-23 15:39:03 +00:00
Tim Shen
33e4d80307 [GraphTraits] Replace all NodeType usage with NodeRef
This should finish the GraphTraits migration.
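
After the migration, a GraphTraits specialization has this shape (a hedged sketch; MyGraph and MyNode are hypothetical):

    #include "llvm/ADT/GraphTraits.h"

    struct MyNode;   // hypothetical graph node
    struct MyGraph;  // hypothetical graph

    namespace llvm {
    template <> struct GraphTraits<MyGraph *> {
      using NodeRef = MyNode *;             // NodeRef replaces NodeType
      using ChildIteratorType = MyNode **;  // any iterator over NodeRef
      static NodeRef getEntryNode(MyGraph *G);
      static ChildIteratorType child_begin(NodeRef N);
      static ChildIteratorType child_end(NodeRef N);
    };
    } // namespace llvm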

Differential Revision: http://reviews.llvm.org/D23730

llvm-svn: 279475
2016-08-22 21:09:30 +00:00
Duncan P. N. Exon Smith
68b6058de5 ADT: Remove ilist_*sentinel_traits, NFC
Remove all the dead code around ilist_*sentinel_traits.  This is a
follow-up to gutting them as part of r279314 (originally r278974),
staged to prevent broken builds in sub-projects.

Uses were removed from clang in r279457 and lld in r279458.

llvm-svn: 279473
2016-08-22 20:51:00 +00:00
Pete Cooper
27f0062b01 Add comments and an assert to follow-up on r279113. NFC.
Philip commented on r279113 to ask for better comments as to
when to use the different versions of getName.  It's also possible
to assert in the simple case that we aren't an overloaded intrinsic
as those have to use the more capable version of getName.

Thanks for the comments Philip.

llvm-svn: 279466
2016-08-22 20:18:28 +00:00
Chandler Carruth
7dc472d231 [PM] Introduce an abstraction for all the analyses over a particular IR
unit for use in the PreservedAnalyses set.

This doesn't have any important functional change yet but it cleans
things up and makes the analysis substantially more efficient by
avoiding querying through the type erasure for every analysis.

I also think it makes it much easier to reason about how analyses are
preserved when walking across pass managers and across IR unit
abstractions.

Thanks to Sean and Mehdi both for the comments and suggestions.

Differential Revision: https://reviews.llvm.org/D23691

llvm-svn: 279360
2016-08-20 04:57:28 +00:00
Tim Shen
823bde34b3 [GraphTraits] Make nodes_iterator dereference to NodeType*/NodeRef
Currently nodes_iterator may dereference to a NodeType* or a NodeType&. Make them all dereference to NodeType*, which is NodeRef later.

Differential Revision: https://reviews.llvm.org/D23704
Differential Revision: https://reviews.llvm.org/D23705

llvm-svn: 279326
2016-08-19 21:20:13 +00:00
Chandler Carruth
682a2c8bc6 [PM] Re-instate r279227 and r279228 with a fix to the way the templating
was done to hopefully appease MSVC.

As an upside, this also implements the suggestion Sanjoy made in code
review, so two for one! =]

I'll be watching the bots to see if there are still issues.

llvm-svn: 279295
2016-08-19 18:36:06 +00:00
Chandler Carruth
969549cf52 [PM] Revert r279227 and r279228 until I can find someone to help me
solve completely opaque MSVC build errors. It complains about lots of
stuff with this change without giving nearly enough information to even
try to fix.

llvm-svn: 279231
2016-08-19 10:51:55 +00:00
Chandler Carruth
9d65c3131d [PM] Make the new pass manager support fully generic extra arguments
to run methods, both for transform passes and analysis passes.

This also allows the analysis manager to use a different set of extra
arguments from the pass manager where useful. Consider passes over
analysis produced units of IR like SCCs of the call graph or loops.
Passes of this nature will often want to refer to the analysis result
that was used to compute their IR units (the call graph or LoopInfo).
And for transformations, they may want to communicate special update
information to the outer pass manager. With this change, it becomes
possible to have a run method for a loop pass that looks more like:

  PreservedAnalyses run(Loop &L, AnalysisManager<Loop, LoopInfo> &AM,
                        LoopInfo &LI, LoopUpdateRecord &UR);

And to query the analysis manager like:

    AM.getResult<MyLoopAnalysis>(L, LI);

This makes accessing the known-available analyses convenient and clear,
and it makes passing customized data structures around easy.

My initial use case is going to be in updating the pass manager layers
when the analysis units of IR change. But there are more use cases here
such as having a layer that lets inner passes signal whether certain
additional passes should be run because of particular simplifications
made. Two desires for this have come up in the past: triggering
additional optimization after successfully unrolling loops, and
triggering additional inlining after collapsing indirect calls to direct
calls.

Despite adding this layer of generic extensibility, the *only* change to
existing, simple usage are for places where we forward declare the
AnalysisManager template. We really shouldn't be doing this because of
the fragility exposed here, but currently it makes coping with the
legacy PM code easier.

Differential Revision: http://reviews.llvm.org/D21462

llvm-svn: 279227
2016-08-19 09:45:16 +00:00
Chandler Carruth
ce7d8b0ddc [PM] Try to work-around what appears to be an MSVC SFINAE issue with
r279217 where it fails to select the path that other compilers select.

The workaround won't be as careful to produce an error when an analysis
result is incorrect, but we can rely on non-MSVC builds to catch such
errors, it seems, and MSVC doesn't seem to support the alternative
techniques.

Hoping this brings the windows bots back to life. If not, will have to
revert all of this.

llvm-svn: 279225
2016-08-19 09:26:00 +00:00
Chandler Carruth
8fa994880c [PM] NFC refactoring: remove the AnalysisManagerBase class, folding it
into the AnalysisManager class template.

Back when I first added this base class there were separate analysis
managers and some plausible reason why it would be a useful factoring of
common code between them. However, after a lot of refactoring cleaning,
we now have *entirely* shared code. The base class was just an arbitrary
division between code in one class template and a separate class
template. It didn't add anything and forced lots of indirection through
"derived_this" for no real gain.

We can always factor a base CRTP class out with common code if there is
ever some *other* analysis manager that wants to share a subset of
logic. But for now, folding things into the primary template is
a non-trivial simplification with no down sides I see. It shortens the
code considerably, removes an unhelpful abstraction, and will make
subsequent patches *dramatically* less complex which enhance the
analysis manager infrastructure to effectively cope with invalidation.

llvm-svn: 279221
2016-08-19 08:31:47 +00:00
Chandler Carruth
9056498e66 [PM] Redesign how the new PM detects whether an analysis result provides
its own invalidate method.

Previously, the technique would assume that if a result didn't have an
invalidate method exactly matching the expected signature, it didn't
have one at all. This is in fact not the case, and we had analyses in
the tree with incorrect signatures for the invalidate method that would
be erroneously invalidated in certain cases! Yikes.

Moreover a result might legitimately want to have multiple overloads for
the invalidate method, and if one changes or a new one is needed we
again really want a compiler error. For example in the tree we had not
added the overload for a *function* IR unit to the invalidate routine
for TLI. Doh.

So a new technique for the SFINAE detection here: if the result has *any*
member spelled "invalidate" we turn off the synthesis of a default
version. We don't care if it is a member function or a member variable
or how many overloads there are. Once a result has something by that
name it must provide suitable overloads for the contexts in which it is
used. This seems much more resilient and durable.
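
Under the new rule, a result that spells any member "invalidate" owns the whole contract (a hedged sketch; MyResult is hypothetical):

    #include "llvm/IR/Function.h"
    #include "llvm/IR/PassManager.h"

    struct MyResult {
      // The mere presence of a member named "invalidate" disables the
      // synthesized default, so suitable overloads must exist for every
      // IR unit this result is used with; a mismatch is a compile error.
      bool invalidate(llvm::Function &, const llvm::PreservedAnalyses &PA) {
        return !PA.areAllPreserved();  // conservative example policy
      }
    };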

Huge props to Richard Smith who helped me figure out how on earth we
could even do this in C++. It took quite some doing. The technique is
remarkably clean however, and merely requires that the analysis results
are not *final* classes. I think that's a requirement we can live with
even if it is a bit odd.

I've fixed the two bad in-tree analysis results. And this will make my
next change which changes the API for invalidate much easier to
validate as correct.

llvm-svn: 279217
2016-08-19 07:49:23 +00:00
Chandler Carruth
f68dd1e089 [PM] Rework the new PM support for building the ModuleSummaryIndex to
directly produce the index as the value type result.

This requires making the index movable which is straightforward. It
greatly simplifies things by allowing us to completely avoid the builder
API and the layers of abstraction inherent there. Instead both pass
managers can directly construct these when run by value. They still
won't be constructed truly eagerly thanks to the optional in the legacy
PM. The code that directly builds the index can also just share a direct
function.

A notable change here is that the result type of the analysis for the
new PM is no longer a reference type. This was really problematic when
making changes to how we handle result types to make our interface
requirements *much* more strict and precise. But I think this is an
overall improvement.

Differential Revision: https://reviews.llvm.org/D23701

llvm-svn: 279216
2016-08-19 07:49:19 +00:00
Wei Ding
ea4d7271dc AMDGPU : Fix QSAD and MQSAD instructions' incorrect data type.
Differential Revision: http://reviews.llvm.org/D23689

llvm-svn: 279126
2016-08-18 19:51:14 +00:00
Pete Cooper
61bc562db4 Add a version of Intrinsic::getName which is more efficient when there are no overloads.
When running 'opt -O2 verify-uselistorder-nodbg.lto.bc', there are 33m allocations.  8.2m
come from std::string allocations in Intrinsic::getName().  It turns out this method only
returns a std::string because it needs to handle overloads, but that is not the common case.

This adds an overload of getName which just returns a StringRef when there are no overloads
and so saves on the allocations.
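
A hedged usage sketch (the Type* parameters are assumed to be prepared by the caller):

    #include "llvm/IR/Intrinsics.h"
    #include <string>

    void example(llvm::Type *I8PtrTy, llvm::Type *I64Ty) {
      // Non-overloaded intrinsic: the new StringRef form, no allocation.
      llvm::StringRef N = llvm::Intrinsic::getName(llvm::Intrinsic::trap);
      // Overloaded intrinsics still take the allocating std::string path.
      std::string M = llvm::Intrinsic::getName(
          llvm::Intrinsic::memcpy, {I8PtrTy, I8PtrTy, I64Ty});
      (void)N;
      (void)M;
    }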

llvm-svn: 279113
2016-08-18 18:30:54 +00:00
Valery Pykhtin
ef1d950dec [AMDGPU] add s_incperflevel/s_decperflevel intrinsics.
Differential revision: https://reviews.llvm.org/D23666

llvm-svn: 279106
2016-08-18 18:06:20 +00:00
Tim Northover
9c83e8d655 GlobalISel: support irtranslation of icmp instructions.
llvm-svn: 278969
2016-08-17 20:25:25 +00:00
Tim Shen
41fe248b16 [GenericDomTree] Change GenericDomTree to use NodeRef in GraphTraits. NFC.
Summary:
Looking at the implementation, GenericDomTree has more specific
requirements on NodeRef, e.g. NodeRefObject->getParent() should compile,
and NodeRef should be a pointer. We can remove the pointer requirement,
but it seems to have little gain, given the limited use cases.

Also changed GraphTraits<Inverse<Inverse<T>>> to be more accurate.

Reviewers: dblaikie, chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D23593

llvm-svn: 278961
2016-08-17 20:01:58 +00:00
Adrian Prantl
5d182b6cce Support the DW_AT_noreturn DWARF flag.
This is used to mark functions with the C++11 [[ noreturn ]] or C11 _Noreturn
attributes.

Patch by Victor Leschuk!

https://reviews.llvm.org/D23167

llvm-svn: 278940
2016-08-17 16:02:43 +00:00
Duncan P. N. Exon Smith
400dc78d11 Scalar: Avoid dereferencing end() in IndVarSimplify
IndVarSimplify::sinkUnusedInvariants calls
BasicBlock::getFirstInsertionPt on the ExitBlock and moves instructions
before it.  This can return end(), so it's not safe to dereference.  Add
an iterator-based overload to Instruction::moveBefore to avoid the UB.
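
A hedged sketch of the pattern this enables:

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instruction.h"

    void sinkTo(llvm::Instruction *I, llvm::BasicBlock *ExitBlock) {
      // getFirstInsertionPt() may return end(); the iterator overload
      // tolerates that, unlike dereferencing to pass an Instruction&.
      llvm::BasicBlock::iterator IP = ExitBlock->getFirstInsertionPt();
      I->moveBefore(*ExitBlock, IP);
    }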

llvm-svn: 278886
2016-08-17 01:54:41 +00:00
Guy Blank
6b866e2b9c [X86] Add xgetbv/xsetbv intrinsics to non-windows platforms
Differential Revision: https://reviews.llvm.org/D21958

llvm-svn: 278782
2016-08-16 06:41:00 +00:00
Tim Shen
e69d467a94 [ADT] Change PostOrderIterator to use NodeRef. NFC.
Reviewers: dblaikie

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23522

llvm-svn: 278752
2016-08-15 21:52:54 +00:00
Wolfgang Pieb
a19b457b8d Local variables whose address is taken and passed on to a call are described
in debug info using their stack slots instead of as an indirection of param reg + 0
offset. This is done by detecting FrameIndexSDNodes in SelectionDAG and generating
FrameIndexDbgValues for them. This ultimately generates DBG_VALUEs with stack
location operands.

Differential Revision: http://reviews.llvm.org/D23283

llvm-svn: 278703
2016-08-15 18:18:26 +00:00
Craig Topper
96f75b2219 [X86] Mark some of the X86 SDNodes as commutative.
llvm-svn: 278653
2016-08-15 04:47:30 +00:00
Mehdi Amini
f448ca43f7 Revert "Revert "Invariant start/end intrinsics overloaded for address space""
This reverts commit 32fc6488e48eafc0ca1bac1bd9cbf0008224d530.

llvm-svn: 278609
2016-08-13 23:31:24 +00:00
Mehdi Amini
2c3de3e169 Revert "Invariant start/end intrinsics overloaded for address space"
This reverts commit r276447.

llvm-svn: 278608
2016-08-13 23:27:32 +00:00
Pete Cooper
c40a63ea39 Add support to PatternMatch for simple const Value cases.
PatternMatch has some paths which can operate on constant instructions,
but not all.  This adds a version of m_Value() to return const Value* and
changes ICmp matching to use auto so that it can match both constant and
mutable instructions.

Tests are also included for both mutable and constant ICmpInst matching.

This will be used in a future commit to constify ValueTracking.cpp.
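
A hedged sketch of the const matching this enables (assuming the overloads this patch adds):

    #include "llvm/IR/PatternMatch.h"

    bool isICmpOfValues(const llvm::Value *V) {
      using namespace llvm::PatternMatch;
      llvm::ICmpInst::Predicate Pred;
      const llvm::Value *L = nullptr, *R = nullptr;
      // The const m_Value() overload binds through const Value* directly,
      // with no const_cast at the call site.
      return match(V, m_ICmp(Pred, m_Value(L), m_Value(R)));
    }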

llvm-svn: 278570
2016-08-12 22:16:05 +00:00
Duncan P. N. Exon Smith
e2d742d768 ADT: Remove the ilist_nextprev_traits customization point
No one is using the capability to implement next and prev another way
(since lld stopped doing it in r278468).  Remove the customization point
by moving the API from ilist_nextprev_traits<T> to ilist_node_access.

The old traits class is still useful/necessary API as a target for
friends of node types that inherit privately from ilist_node.
Eventually I plan to either remove it entirely or move the template
parameters to the methods.

(Note: if there's desire to bring back customization of next/prev
pointers in the future (e.g., to pack some bits in there), I think a
traits class like this is an awkward way to accomplish it.  Instead, we
should change ilist<T> to be ilist<ilist_node<T>>, and give an extra
template parameter to ilist_node.)

llvm-svn: 278532
2016-08-12 17:32:34 +00:00
Duncan P. N. Exon Smith
2e8f72520b ADT: Share code for embedded sentinel traits, NFC
Share code for the (mostly problematic) embedded sentinel traits.
- Move the LLVM_NO_SANITIZE("object-size") attribute to
  ilist_half_embedded_sentinel_traits and ilist_embedded_sentinel_traits
  (previously it spread throughout the code duplication).
- Add an ilist_full_embedded_sentinel_traits which has no UB (but has
  the downside of storing the complete node).
- Replace all the custom sentinel traits in LLVM with a declaration of
  ilist_sentinel_traits that inherits from one of the embedded sentinel
  traits classes.

There are still custom sentinel traits in other LLVM subprojects.  I'll
remove those in a follow-up.

Nothing at all should be changing here, this is just rearranging code.
Note that the final goal here is to remove the sentinel traits
altogether, settling on the memory layout of
ilist_half_embedded_sentinel_traits without the UB.  This intermediate
step moves the logic into ilist.h.

llvm-svn: 278513
2016-08-12 15:00:55 +00:00
Gor Nishanov
37bbef4980 [Coroutines]: Part6b: Add coro.id intrinsic.
Summary:
1. Make the coroutine representation more robust against optimizations that may duplicate instructions, by introducing a coro.id intrinsic that returns a token that gets fed into coro.alloc and coro.begin. Because coro.id returns a token, it won't get duplicated and can be used as a reliable indicator of coroutine identity when a particular coroutine call gets inlined.
2. Move the last three arguments of coro.begin into coro.id, as they would otherwise be shared if coro.begin got duplicated.
3. Doc + test + code updated to support the new intrinsic.

Reviewers: mehdi_amini, majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23412

llvm-svn: 278481
2016-08-12 05:45:49 +00:00
David Majnemer
319d420e44 Use the range variant of find/find_if instead of unpacking begin/end
If the result of the find is only used to compare against end(), just
use is_contained instead.

No functionality change is intended.
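
A minimal sketch of the preferred spelling (illustrative names):

```
#include "llvm/ADT/STLExtras.h"
#include <vector>

// Before: std::find(Vec.begin(), Vec.end(), V) != Vec.end()
// After: the range-based helper says the same thing directly.
static bool hasValue(const std::vector<int> &Vec, int V) {
  return llvm::is_contained(Vec, V);
}
```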

llvm-svn: 278469
2016-08-12 03:55:06 +00:00
Tim Shen
f3b1df95f7 [ADT] Migrate DepthFirstIterator to use NodeRef
Summary:
Notice that the data layout is changed: instead of using
std::pair<PointerIntPair<NodeType*, 1>, ChildItTy>, it now uses
std::pair<NodeRef, Optional<ChildItTy>>.

An NFC but noteworthy change is operator==(): since we only compare
an iterator against end(), it's better to put an assert there so that
people notice when it fails.
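
For reference, client-side iteration is unaffected by the NodeRef migration; a minimal sketch:

```
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"
using namespace llvm;

// Visit each reachable block in DFS preorder.
static void visitReachable(Function &F) {
  for (BasicBlock *BB : depth_first(&F.getEntryBlock()))
    (void)BB; // process BB
}
```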

Reviewers: dblaikie, chandlerc

Subscribers: mzolotukhin, llvm-commits

Differential Revision: https://reviews.llvm.org/D23146

llvm-svn: 278437
2016-08-11 22:36:16 +00:00
David Majnemer
85242fb9f9 Use the range variant of find instead of unpacking begin/end
If the result of the find is only used to compare against end(), just
use is_contained instead.

No functionality change is intended.

llvm-svn: 278433
2016-08-11 22:21:41 +00:00
Piotr Padlewski
2ede910872 Don't import variadic functions
Summary:
This patch adds IsVariadicFunction bit to summary in order
to not import variadic functions. Inliner doesn't inline
variadic functions because it is hard to reason about it.

This one small fix improves Importer by about 16%
(going from 86% to 100% of imported functions that are
inlined anywhere)
on some spec benchmarks like 'int' and others.
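
A hedged sketch of the gist; the actual summary-building code differs, this only illustrates the predicate involved:

```
#include "llvm/IR/Function.h"
using namespace llvm;

// Variadic functions are flagged in the summary and skipped by the
// importer, mirroring the inliner's refusal to inline them.
static bool isEligibleForImport(const Function &F) {
  return !F.isVarArg();
}
```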

Reviewers: eraman, mehdi_amini, tejohnson

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23339

llvm-svn: 278432
2016-08-11 22:13:57 +00:00
Wei Ding
f99a9aad71 AMDGPU : Add intrinsic for instruction v_cvt_pk_u8_f32
Differential Revision: http://reviews.llvm.org/D23336

llvm-svn: 278403
2016-08-11 20:34:48 +00:00
Wei Ding
afed1ab282 AMDGPU : Add LLVM intrinsics for SAD related instructions.
Differential Revision: http://reviews.llvm.org/D23133

llvm-svn: 278354
2016-08-11 16:33:53 +00:00
Changpeng Fang
179532ade9 AMDGPU/SI: Implement amdgcn image intrinsics with sampler
Summary:
  This patch defines and implements the amdgcn image intrinsics with a sampler.

    1. Define the vdata type to be llvm_anyfloat_ty, the address type to be llvm_anyfloat_ty,
       and the rsrc type to be llvm_anyint_ty. As a result, we expect the intrinsic names
       to have three suffixes to overload each of these three types;

    2. R128 as well as two other flags are implied in the three types; for example,
       if you use v8i32 as the resource type, then r128 is 0!

    3. Don't expose the TFE flag; other flags are exposed in the instruction order:
       unrm, glc, slc, lwe and da.

Differential Revision: http://reviews.llvm.org/D22838

Reviewed by:
  arsenm and tstellarAMD

llvm-svn: 278291
2016-08-10 21:15:30 +00:00
Gor Nishanov
04cd924e8d [Coroutines] Part 6: Elide dynamic allocation of a coroutine frame when possible
Summary:
A particular coroutine usage pattern, where a coroutine is created, manipulated and
destroyed by the same calling function, is common for coroutines implementing the
RAII idiom and is suitable for the allocation elision optimization, which avoids
dynamic allocation by storing the coroutine frame as a static `alloca` in its
caller.

The coro.free and coro.alloc intrinsics are used to indicate which code needs to be suppressed
when dynamic allocation elision happens:
```
entry:
  %elide = call i8* @llvm.coro.alloc()
  %need.dyn.alloc = icmp ne i8* %elide, null
  br i1 %need.dyn.alloc, label %coro.begin, label %dyn.alloc
dyn.alloc:
  %alloc = call i8* @CustomAlloc(i32 4)
  br label %coro.begin
coro.begin:
  %phi = phi i8* [ %elide, %entry ], [ %alloc, %dyn.alloc ]
  %hdl = call i8* @llvm.coro.begin(i8* %phi, i32 0, i8* null,
                          i8* bitcast ([2 x void (%f.frame*)*]* @f.resumers to i8*))
```
and
```
  %mem = call i8* @llvm.coro.free(i8* %hdl)
  %need.dyn.free = icmp ne i8* %mem, null
  br i1 %need.dyn.free, label %dyn.free, label %if.end
dyn.free:
  call void @CustomFree(i8* %mem)
  br label %if.end
if.end:
  ...
```

If heap allocation elision is performed, we replace coro.alloc with a static alloca on the caller frame and coro.free with a null constant.

Also, if there are any tail calls referencing the coroutine frame, we need to remove the tail call attribute, since the coroutine frame now lives on the stack.
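
A hedged sketch of that tail-marker cleanup (illustrative; not the pass's exact code):

```
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Once the coroutine frame is a stack alloca, a 'tail' marker on a call
// that references it is no longer valid and must be dropped.
static void dropTailMarker(CallInst *CI) {
  if (CI->isTailCall())
    CI->setTailCall(false);
}
```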

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
3.Add empty coroutine passes. (https://reviews.llvm.org/D22847)
4.Add coroutine devirtualization + tests.
ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
c) Do devirtualization (https://reviews.llvm.org/D23229)
5.Add CGSCC restart trigger + tests. (https://reviews.llvm.org/D23234)
6.Add coroutine heap elision + tests.  <= we are here
7.Add the rest of the logic (split into more patches)

Reviewers: mehdi_amini, majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23245

llvm-svn: 278242
2016-08-10 16:40:39 +00:00
Sean Silva
11e71061b1 Consistently use FunctionAnalysisManager
Besides a general consistency benefit, the extra layer of indirection
allows the mechanical part of https://reviews.llvm.org/D23256 that
requires touching every transformation and analysis to be factored out
cleanly.

Thanks to David for the suggestion.

llvm-svn: 278077
2016-08-09 00:28:15 +00:00
Gor Nishanov
0346aed75a [Coroutines] Part 5: Add CGSCC restart trigger
Summary:
The CoroSplit pass processes the coroutine twice. First, it lets it go through
the complete IPO optimization pipeline as a single function. It forces a restart
of the pipeline by inserting an indirect call to an empty function "coro.devirt.trigger",
which is devirtualized by the CoroElide pass; this triggers a restart of the pipeline by CGPassManager.
(In later patches, when the CoroSplit pass sees the same coroutine the second time, it splits it up
and adds the coroutine subfunctions to the SCC to be processed by the IPO pipeline.)

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
3.Add empty coroutine passes. (https://reviews.llvm.org/D22847)
4.Add coroutine devirtualization + tests.
ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
c) Do devirtualization (https://reviews.llvm.org/D23229)
5.Add CGSCC restart trigger + tests. <= we are here
6.Add coroutine heap elision + tests.
7.Add the rest of the logic (split into more patches)

Reviewers: mehdi_amini, majnemer

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D23234

llvm-svn: 277936
2016-08-06 20:44:39 +00:00
Sanjoy Das
aca3962ed4 [InstCombine] Don't coerce non-integral pointers to integers
Reviewers: majnemer

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D23231

llvm-svn: 277910
2016-08-06 02:58:48 +00:00
Gor Nishanov
d04999a10d Part 4c: Coroutine Devirtualization: Devirtualize coro.resume and coro.destroy.
Summary:
This is the 4c patch of the coroutine series. The CoroElide pass now checks whether a PostSplit coro.begin
is referenced by coro.subfn.addr intrinsics. If so, it replaces the coro.subfn.addr calls with the appropriate
coroutine subfunction associated with that coro.begin.

Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

Upstreaming sequence (rough plan)
1.Add documentation. (https://reviews.llvm.org/D22603)
2.Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
3.Add empty coroutine passes. (https://reviews.llvm.org/D22847)
4.Add coroutine devirtualization + tests.
ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
c) Do devirtualization <= we are here
5.Add CGSCC restart trigger + tests.
6.Add coroutine heap elision + tests.
7.Add the rest of the logic (split into more patches)

Reviewers: majnemer

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D23229

llvm-svn: 277908
2016-08-06 02:16:35 +00:00
Mehdi Amini
76b7eb7e92 Update outdated comments in the new PM internals (NFC)
The analysis manager was made non-optional and turned into a
reference instead of a pointer in r272978. Some comments were
still referring to the previous behavior.

llvm-svn: 277857
2016-08-05 19:51:00 +00:00
Sanjay Patel
ad0f97a7d9 fix documentation comments; NFC
llvm-svn: 277853
2016-08-05 19:09:25 +00:00
Justin Bogner
c1fadbec04 IR: Provide an IRBuilder Inserter that calls a callback after insertion
Add a generalized IRBuilderCallbackInserter, which is just given a
callback to execute after insertion. This can be used to get rid of
the custom inserter in InstCombine, which will in turn allow me to add
target specific InstCombineCalls API for intrinsics without horrible
layering violations.
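
A hedged sketch of the intended usage; the constructor shapes here are assumptions based on the description:

```
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Record every instruction the builder inserts, e.g. to seed a worklist.
static void emitWithCallback(BasicBlock *BB,
                             SmallVectorImpl<Instruction *> &Worklist) {
  IRBuilderCallbackInserter Inserter(
      [&Worklist](Instruction *I) { Worklist.push_back(I); });
  IRBuilder<ConstantFolder, IRBuilderCallbackInserter>
      Builder(BB->getContext(), ConstantFolder(), Inserter);
  Builder.SetInsertPoint(BB);
  Builder.CreateUnreachable(); // the callback fires on insertion
}
```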

llvm-svn: 277784
2016-08-04 23:41:01 +00:00
David Majnemer
cac831b368 [coroutines] Part 4[ab]: Coroutine Devirtualization: Lower coro.resume and coro.destroy.
This is the fourth patch in the coroutine series. The CoroEarly pass now lowers coro.resume
and coro.destroy intrinsics by replacing them with an indirect call to an address
returned by the coro.subfn.addr intrinsic. This is done so that CGPassManager recognizes
devirtualization when CoroElide replaces a call to coro.subfn.addr with an appropriate
function address.

Patch by Gor Nishanov!

Differential Revision: https://reviews.llvm.org/D22998

llvm-svn: 277765
2016-08-04 20:30:07 +00:00
Chandler Carruth
824dbd8926 [PM] Change the name of the repeating utility to something less
overloaded (and simpler).

Sean rightly pointed out in code review that we've started using
"wrapper pass" as a specific part of the old pass manager, and in fact
it is more applicable there. Here, we really have a pass *template* to
build a repeated pass, so call it that.

llvm-svn: 277689
2016-08-04 03:52:53 +00:00
Chandler Carruth
1632b2fb10 [PM] Add the explicit copy, move, swap, and assignment boilerplate
required by MSVC 2013.

This also makes the repeating pass wrapper assignable. Mildly
unfortunate as it means we can't use a const member for the int, but
that is a really minor invariant to try to preserve at the cost of loss
of regularity of the type. Yet another annoyance of the particular C++
object / move semantic model.

llvm-svn: 277582
2016-08-03 08:16:08 +00:00
Chandler Carruth
9c54dde4a2 [PM] Add a generic 'repeat N times' pass wrapper to the new pass
manager.

While this has some utility for debugging and testing on its own, it is
primarily intended to demonstrate the technique for adding custom
wrappers that can provide more interesting iteration behavior in
a nice, orthogonal, and composable layer.

Being able to write these kinds of very dynamic and customized controls
for running passes was one of the motivating use cases of the new pass
manager design, and this gives a hint at how they might look. The actual
logic is tiny here, and most of this is just wiring in the pipeline
parsing so that this can be widely used.

I'm adding this now to show the wiring without a lot of business logic.
This is a precursor patch for showing how an "iterate up to N times as
long as we devirtualize a call" utility can be added as a separable and
composable component alongside the CGSCC pass management.
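
A hedged sketch of how a client might use the wrapper; the helper name createRepeatedPass is assumed from the description:

```
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/InstCombine/InstCombine.h"
using namespace llvm;

// Build a function pass that runs instcombine three times per function.
static void addRepeated(FunctionPassManager &FPM) {
  FPM.addPass(createRepeatedPass(3, InstCombinePass()));
}
```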

Differential Revision: https://reviews.llvm.org/D22405

llvm-svn: 277581
2016-08-03 07:44:48 +00:00
Tim Shen
4df0ad8244 [ADT] NFC: Generalize GraphTraits requirement of "NodeType *" in interfaces to "NodeRef", and migrate SCCIterator.h to use NodeRef
Summary: By generalizing the interface, users are able to inject more flexible node tokens into the algorithm, for example, a pair of vector<Node>* and an index integer. Currently I have only migrated SCCIterator to use NodeRef, but more is coming. It's NFC.
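
For reference, a minimal sketch of client-side SCC iteration, which is unchanged for typical users after the migration:

```
#include "llvm/ADT/SCCIterator.h"
#include "llvm/Analysis/CallGraph.h"
using namespace llvm;

// Walk strongly connected components in bottom-up order.
static void visitSCCs(CallGraph &CG) {
  for (scc_iterator<CallGraph *> I = scc_begin(&CG); !I.isAtEnd(); ++I)
    for (CallGraphNode *N : *I)
      (void)N; // process one node of the current SCC
}
```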

Reviewers: dblaikie, chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22937

llvm-svn: 277399
2016-08-01 22:32:20 +00:00
Tim Northover
dda86274a2 CodeGen: add new "intrinsic" MachineOperand kind.
This will be used during GlobalISel, where we need a more robust and readable
way to write tests than a simple immediate ID.

llvm-svn: 277209
2016-07-29 20:32:59 +00:00
Justin Lebar
b1ec783712 Revert "Don't invoke getName() from Function::isIntrinsic().", rL276942.
This broke some out-of-tree AMDGPU tests that relied on the old behavior
wherein isIntrinsic() would return true for any function that starts
with "llvm.".  And in general that change will not play nicely with
out-of-tree backends.

llvm-svn: 277087
2016-07-28 23:58:15 +00:00
Sanjoy Das
76a6f34a7e [IR] Introduce a non-integral pointer type
Summary:
This change adds a `ni` specifier in the `datalayout` string to denote
pointers in some given address spaces as "non-integral", and adds some
typing rules around these special pointers.
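
A hedged sketch of the new specifier and its query; the query name follows the patch description:

```
#include "llvm/IR/DataLayout.h"
using namespace llvm;

// "ni:1" marks pointers in address space 1 as non-integral.
static bool demo() {
  DataLayout DL("e-ni:1");
  return DL.isNonIntegralAddressSpace(1); // true
}
```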

Reviewers: majnemer, chandlerc, atrick, dberlin, eli.friedman, tstellarAMD, arsenm

Subscribers: arsenm, mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D22488

llvm-svn: 277085
2016-07-28 23:43:38 +00:00
Wei Ding
ce5d57c9f9 AMDGPU : Add intrinsics for compare with the full wavefront result
Differential Revision: http://reviews.llvm.org/D22482

llvm-svn: 276998
2016-07-28 16:42:13 +00:00
Justin Lebar
c1a3abfb94 Don't invoke getName() from Function::isIntrinsic().
Summary:
getName() involves a hashtable lookup, so is expensive given how
frequently isIntrinsic() is called.  (In particular, many users cast to
IntrinsicInstr or one of its subclasses before calling
getIntrinsicID().)

This has an incidental functional change: Before, isIntrinsic() would
return true for any function whose name started with "llvm.", even if it
wasn't properly an intrinsic.  The new behavior seems more correct to
me, because it's strange to say that isIntrinsic() is true, but
getIntrinsicID() returns "not an intrinsic".

Some callers want the old behavior -- they want to know whether the
caller is a recognized intrinsic, or might be one in some other version
of LLVM.  For them, we added Function::hasLLVMReservedName(), which
checks whether the name starts with "llvm.".

This change is good for a 1.5% e2e speedup compiling a large Eigen
benchmark.

Reviewers: bogner

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D22065

llvm-svn: 276942
2016-07-27 23:46:57 +00:00
David Majnemer
466bebafa0 [coroutines] Part 2 of N: Adding Coroutine Intrinsics
This is the second patch in the coroutine series. It adds coroutine
intrinsics and updates intrinsic cost in TargetTransformInfoImpl.h.

Patch by Gor Nishanov!

Differential Revision: https://reviews.llvm.org/D22659

llvm-svn: 276839
2016-07-27 05:12:35 +00:00
Matt Arsenault
d60a4f902f AMDGPU: Add fp legacy instruction intrinsics
This could use some additional optimization work
to use mad/mac legacy.

llvm-svn: 276764
2016-07-26 16:45:45 +00:00
Jan Vesely
4192bbafb1 AMDGPU: Remove read_workdim intrinsic
Differential Revision: https://reviews.llvm.org/D22732

llvm-svn: 276682
2016-07-25 20:17:02 +00:00
Anna Thomas
70e110c227 Add invariant start call creation in IRBuilder. NFC
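
A hedged sketch of the new helper's shape; the exact signature is assumed from the description:

```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Emit llvm.invariant.start over the first 4 bytes behind Ptr.
static CallInst *markInvariant(IRBuilder<> &B, Value *Ptr) {
  return B.CreateInvariantStart(Ptr, B.getInt64(4));
}
```
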
Differential Revision: https://reviews.llvm.org/D22700

llvm-svn: 276471
2016-07-22 20:57:23 +00:00
Anna Thomas
6f5ce86e80 Invariant start/end intrinsics overloaded for address space
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address
space.

With this change, these intrinsics are overloaded for any address space
for memory objects, and we can use these LLVM invariant intrinsics in
non-default address spaces.

Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

This overloaded intrinsic is needed for representing final or invariant
memory in managed languages.

Reviewers: apilipenko, reames

Subscribers: llvm-commits
llvm-svn: 276447
2016-07-22 17:49:40 +00:00
Matt Arsenault
dc6f828a36 AMDGPU: Add HSA dispatch id intrinsic
llvm-svn: 276437
2016-07-22 17:01:30 +00:00
Simon Pilgrim
95ed20cecf [X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128 (reapplied)
As reported on PR26235, we don't currently make use of the VBROADCASTF128/VBROADCASTI128 instructions (or the AVX512 equivalents) to load+splat a 128-bit vector to both lanes of a 256-bit vector.

This patch enables lowering from subvector insertion/concatenation patterns and auto-upgrades the llvm.x86.avx.vbroadcastf128.pd.256 / llvm.x86.avx.vbroadcastf128.ps.256 intrinsics to match.

We could possibly investigate using VBROADCASTF128/VBROADCASTI128 to load repeated constants as well (similar to how we already do for scalar broadcasts).

Reapplied with fix for PR28657 - removed intrinsic definitions (clang companion patch to be submitted shortly).

Differential Revision: https://reviews.llvm.org/D22460

llvm-svn: 276416
2016-07-22 13:58:44 +00:00
Anna Thomas
a6e42b23de Revert "Invariant start/end intrinsics overloaded for address space"
This reverts commit r276316.

llvm-svn: 276320
2016-07-21 19:06:28 +00:00
Anna Thomas
219ef36aa0 Invariant start/end intrinsics overloaded for address space
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address space.

With this change, these intrinsics are overloaded for any address space for memory objects
and we can use these llvm invariant intrinsics in non-default address spaces.

Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)

This overloaded intrinsic is needed for representing final or invariant memory in managed languages.

Reviewers: tstellarAMD, reames, apilipenko

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D22519

llvm-svn: 276316
2016-07-21 18:41:44 +00:00
Amaury Sechet
e639831d0c Expose AttributeSetNode, use it to provide aggregate getter for attribute in the C API.
Summary: See D19181 for context.

Reviewers: whitequark, Wallbraker, jyknight, echristo, bkramer, void

Subscribers: mehdi_amini

Differential Revision: http://reviews.llvm.org/D21265

llvm-svn: 276236
2016-07-21 04:25:06 +00:00
Adam Nemet
377d292ea8 [OptDiag,LV] Add hotness attribute to applied-optimization remarks
Test coverage is provided by modifying the function in the FP-math
testcase that we are allowed to vectorize.

llvm-svn: 276223
2016-07-21 01:07:13 +00:00
Adam Nemet
2a94ac8820 [OptDiag,LV] Add hotness attribute to the derived analysis remarks
This includes FPCompute and Aliasing.

Testcase is based on no_fpmath.ll.

llvm-svn: 276211
2016-07-20 23:50:32 +00:00
Adam Nemet
46bb1fa09e [OptDiag,LV] Add hotness attribute to analysis remarks
The earlier change added hotness attribute to missed-optimization
remarks.  This follows up with the analysis remarks (the ones explaining
the reason for the missed optimization).

llvm-svn: 276192
2016-07-20 21:44:26 +00:00
Simon Pilgrim
e2f3b489b8 [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NAN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding, which converts them to either zero or UNDEF. This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.

A companion clang patch is at D22105

Differential Revision: https://reviews.llvm.org/D22106

llvm-svn: 275981
2016-07-19 15:07:43 +00:00
Tobias Grosser
fc9b1f99e6 Style: drop some unnecessary ';' [NFC]
llvm-svn: 275963
2016-07-19 09:01:46 +00:00
Matt Arsenault
f3c657912f AMDGPU: Add intrinsic for s_flbit_i32/v_ffbh_i32
llvm-svn: 275871
2016-07-18 18:35:05 +00:00
Matt Arsenault
924a52e452 AMDGPU/R600: Replace barrier intrinsics
llvm-svn: 275870
2016-07-18 18:34:59 +00:00
Simon Dardis
2e9a461206 [inlineasm] Propagate operand constraints to the backend
When SelectionDAGISel transforms a node representing an inline asm
block, memory constraint information is not preserved. This can cause
constraints to be broken when a memory offset is of the form:

offset + frame index

when the frame is resolved.

By propagating the constraints all the way to the backend, targets can
enforce memory operands of inline assembly to conform to their constraints.

For MIPSR6, some instructions, such as ll/sc, had their offsets reduced from 16 bits
to 9 bits. This becomes problematic when using inline assembly
to perform atomic operations, as an offset can be generated that is too big to
encode in the instruction.

Reviewers: dsanders, vkalintris

Differential Revision: https://reviews.llvm.org/D21615

llvm-svn: 275786
2016-07-18 13:17:31 +00:00
Teresa Johnson
ddb22b2673 [ThinLTO] Perform profile-guided indirect call promotion
Summary:
To enable profile-guided indirect call promotion in ThinLTO mode, we
simply add call graph edges for each profitable target from the profile
to the summaries, then the summary-guided importing will consider the
callee for importing as usual.

Also we need to enable the indirect call promotion pass creation in the
PassManagerBuilder when PerformThinLTO=true (we are in the ThinLTO
backend), so that the newly imported functions are considered for
promotion in the backends.

The IC promotion profiles refer to callees by GUID, which required
adding GUIDs to the per-module VST in bitcode (and assigning them
valueIds similar to how they are assigned valueIds in the combined
index).

Reviewers: mehdi_amini, xur

Subscribers: mehdi_amini, davidxl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21932

llvm-svn: 275707
2016-07-17 14:47:01 +00:00
Matt Arsenault
6779483bea AMDGPU: Remove AMDGPU.ldexp
llvm-svn: 275618
2016-07-15 21:26:56 +00:00
Matt Arsenault
3bfc10ac74 AMDGPU: Remove legacy rsq.clamped intrinsic
Mesa still has a use of llvm.AMDGPU.rsq.f64 remaining.

Also fix mismatch with non-IEEE rsq selecting to IEEE rsq.

llvm-svn: 275617
2016-07-15 21:26:52 +00:00