This change adds hierarchical "time trace" profiling blocks that can be visualized in Chrome in a "flame chart" style. Each profiling block can have a "detail" string that, for example, indicates the file being processed, the template being instantiated, the function being optimized, etc.
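For illustration, a minimal usage sketch of the API this adds (based on llvm/Support/TimeProfiler.h as it exists today; initialization and trace-file output are omitted, and the exact signatures should be treated as assumptions since they have evolved since this patch):

#include "llvm/ADT/StringRef.h"
#include "llvm/Support/TimeProfiler.h"

void compileOneFile(llvm::StringRef Path) {
  // Each scope becomes one nested bar in the Chrome flame chart; the second
  // argument is the optional "detail" string (here, the file being processed).
  // All scopes are no-ops unless the time trace profiler has been initialized.
  llvm::TimeTraceScope FileScope("Source", Path);
  {
    llvm::TimeTraceScope FEScope("Frontend");
    // ... parse and emit IR ...
  }
  {
    llvm::TimeTraceScope BEScope("Backend");
    // ... optimize and generate code ...
  }
}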
This is taken from GitHub PR: https://github.com/aras-p/llvm-project-20170507/pull/2
Patch by Aras Pranckevičius.
Differential Revision: https://reviews.llvm.org/D58675
llvm-svn: 357340
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Implementing -print-before-all/-print-after-all/-filter-print-funcs support
through PassInstrumentation callbacks.
- PrintIR routines implement printing callbacks.
- StandardInstrumentations class provides a central place to manage all
the "standard" in-tree pass instrumentations. Currently it registers
PrintIR callbacks.
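For reference, a hedged sketch of how such printing callbacks are registered (the callback names and signatures follow the PassInstrumentation API as of this change and have been reworked since, so treat them as assumptions rather than the exact in-tree PrintIR code):

#include "llvm/ADT/Any.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/PassInstrumentation.h"
#include "llvm/Support/raw_ostream.h"

void registerPrintIRCallbacks(llvm::PassInstrumentationCallbacks &PIC) {
  PIC.registerBeforePassCallback([](llvm::StringRef PassID, llvm::Any IR) {
    (void)IR; // a real callback would unwrap the IR unit and print it
    llvm::errs() << "*** IR Dump Before " << PassID << " ***\n";
    return true; // returning true lets the pass run
  });
  PIC.registerAfterPassCallback([](llvm::StringRef PassID, llvm::Any IR) {
    (void)IR; // likewise, the real PrintIR callback prints the IR unit
    llvm::errs() << "*** IR Dump After " << PassID << " ***\n";
  });
}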
Reviewers: chandlerc, paquette, philip.pfaffe
Differential Revision: https://reviews.llvm.org/D50923
llvm-svn: 342896
This patch adds per-function size information remarks. Previously, passing
-Rpass-analysis=size-info would only give you per-module changes. By adding
the ability to do this per-function, it's easier to see which functions
contributed the most to size changes.
https://reviews.llvm.org/D51467
llvm-svn: 341588
Add a "CouldOnlyImpactOneFunction" bool that's true when we pass in a function.
Just cleaning up a little bit, since I'm going to add in the per-function
remarks soon from D51467.
llvm-svn: 341407
ModuleCount = InstrCount was incorrect. It should have been
InstrCount = ModuleCount. This was making it emit an extra, incorrect remark
for Print Module IR.
The test didn't catch this, because it didn't ensure that the only remark
output was from the desired pass. So, it was possible to have an extra remark
come through and not fail. Updated the test so that we ensure that the last
remark that's output comes from the desired pass. This is done by ensuring
that whatever is being read after the last remark is YAML output rather than
some incorrect garbage.
llvm-svn: 341267
In basic block, loop, and function passes, we already have a function that
we can use to emit optimization remarks. We can use that instead of searching
the module for the first suitable function (that is, one that contains at
least one basic block).
llvm-svn: 341253
Instead of counting the size of the entire module every time we run a pass,
pass along a delta instead and use that to emit the remark.
This means we only have to use (on average) smaller IR units to calculate
instruction counts. E.g., in a BB pass, we only need to look at the delta of
the BB instead of the delta of the entire module.
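A hedged sketch of the scheme (illustrative names, not the actual LegacyPassManager code): keep a cached module-wide instruction count and fold a per-IR-unit delta into it instead of recounting the whole module after every pass.

#include <cstdint>

struct SizeRemarkInfo {
  uint64_t ModuleInstrCount = 0; // cached instruction count for the whole module
};

// CountBefore/CountAfter are the instruction counts of just the transformed IR
// unit (e.g. one basic block), measured immediately before and after the pass.
inline int64_t updateWithDelta(SizeRemarkInfo &Info, uint64_t CountBefore,
                               uint64_t CountAfter) {
  int64_t Delta =
      static_cast<int64_t>(CountAfter) - static_cast<int64_t>(CountBefore);
  Info.ModuleInstrCount = static_cast<uint64_t>(
      static_cast<int64_t>(Info.ModuleInstrCount) + Delta);
  return Delta; // a remark is only worth emitting when Delta != 0
}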
6/6
(This improved compile time for size remarks on sqlite3 + O2 significantly)
llvm-svn: 341250
Same as the previous NFC commits in the same vein.
This one introduces a TODO. I'm going to change emitInstrCountChangedRemark
so that it takes in a delta. Since the delta isn't necessary yet, it's not
there. For now, this means that we're calculating the size of the module
twice.
Just done separately to keep the patches small.
4/6
llvm-svn: 341248
Size remarks are slow due to lots of recalculation of the module.
This is similar to the previous commit. Cache the size of the module and
update counts in basic block passes based off a less-expensive delta.
2/6
llvm-svn: 341246
Size remarks are slow due to lots of recalculation of the module.
Pre-calculate the module size and initial function size for a remark. Use
deltas calculated using the less-expensive function IR count to update the
module counts for Function passes.
1/6
llvm-svn: 341245
Moving PassTimingInfo from legacy pass manager code into a separate header.
Making it suitable for both legacy and new pass manager.
Adding a test on -time-passes main functionality.
llvm-svn: 340872
If we deem the analysis pass useless and delete it, we need to make sure we remove it from AnUsageMap. Otherwise another pass might later be allocated in the freed memory, which would cause us to reuse the AnalysisUsage cached for the original pass instead of the one computed for the new pass.
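A minimal model of the fix (simplified types, not the actual legacy pass manager code): the cache is keyed by the pass pointer, so the entry has to be erased before the pass object is freed.

#include "llvm/ADT/DenseMap.h"

struct Pass { /* ... */ };
struct AnalysisUsage { /* ... */ };

void removeDeadAnalysisPass(llvm::DenseMap<Pass *, AnalysisUsage *> &AnUsageMap,
                            Pass *P) {
  AnUsageMap.erase(P); // drop the cached AnalysisUsage keyed by this address
  delete P;            // a later pass allocation may now reuse this address
}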
Fixes PR38511
Differential Revision: https://reviews.llvm.org/D50573
llvm-svn: 340210
Summary:
This takes 22ms out of ~20s compiling sqlite3.c because we call it
for every unit of compilation and every pass.
Reviewers: paquette, anemet
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D49586
llvm-svn: 337654
When using clang --save-stats -mllvm -time-passes, both timers and stats
end up in the same json file.
We could end up with things like:
{
"asm-printer.EmittedInsts": 1,
"time.pass.Virtual Register Map.wall": 2.9015541076660156e-04,
"time.pass.Virtual Register Map.user": 2.0500000000000379e-04,
"time.pass.Virtual Register Map.sys": 8.5000000000001741e-05,
}
This patch makes use of the pass argument name (if available) in the
JSON key to end up with things like:
{
"asm-printer.EmittedInsts": 1,
"time.pass.virtregmap.wall": 2.9015541076660156e-04,
"time.pass.virtregmap.user": 2.0500000000000379e-04,
"time.pass.virtregmap.sys": 8.5000000000001741e-05,
}
This also helps us avoid writing another JSON printer to handle all the
cases that we could have in our pass names.
Fixed test instead of adding a new one originally from r334649.
Differential Revision: https://reviews.llvm.org/D48109
llvm-svn: 334657
When using clang --save-stats -mllvm -time-passes, both timers and stats
end up in the same json file.
We could end up with things like:
{
"asm-printer.EmittedInsts": 1,
"time.pass.Virtual Register Map.wall": 2.9015541076660156e-04,
"time.pass.Virtual Register Map.user": 2.0500000000000379e-04,
"time.pass.Virtual Register Map.sys": 8.5000000000001741e-05,
}
This patch makes use of the pass argument name (if available) in the
JSON key to end up with things like:
{
"asm-printer.EmittedInsts": 1,
"time.pass.virtregmap.wall": 2.9015541076660156e-04,
"time.pass.virtregmap.user": 2.0500000000000379e-04,
"time.pass.virtregmap.sys": 8.5000000000001741e-05,
}
This also helps us avoid writing another JSON printer to handle all the
cases that we could have in our pass names.
Differential Revision: https://reviews.llvm.org/D48109
llvm-svn: 334649
Currently the iteration order of OnTheFlyManagers is not deterministic
between executions, which means some of the test/Other/opt-*-pipeline.ll
tests fail non-deterministically if an additional on-the-fly manager is
added, as in D45330.
By using MapVector, we always iterate in the insertion order. As we are
not removing elements, there shouldn't be a performance hit, except that
we store an additional vector with the keys.
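A small standalone illustration of the property relied on here (arbitrary key/value types): llvm::MapVector preserves insertion order during iteration.

#include "llvm/ADT/MapVector.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::MapVector<int, const char *> OnTheFly;
  OnTheFly.insert({42, "first inserted"});
  OnTheFly.insert({7, "second inserted"});
  OnTheFly.insert({19, "third inserted"});
  // Always prints in insertion order, regardless of the key values.
  for (const auto &KV : OnTheFly)
    llvm::outs() << KV.first << " -> " << KV.second << "\n";
  return 0;
}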
Reviewers: efriedma, chandlerc, pcc, jhenderson
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D47317
llvm-svn: 333231
The casts in the delta computation for size remarks should have
been static casts. This fixes that.
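The reason in isolation (illustrative values, not the original expression): without converting to a signed type first, a shrinking count wraps around instead of going negative.

#include <cstdint>

int64_t delta(uint64_t Before, uint64_t After) {
  return static_cast<int64_t>(After) - static_cast<int64_t>(Before);
}
// delta(10, 7) == -3, whereas the unsigned expression (After - Before) would
// yield 18446744073709551613.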
Thanks to Dávid Bolvanský for pointing that out.
llvm-svn: 332758
This patch adds a remark which tells the user when a pass changes the number of
IR instructions in a module.
It can be enabled by using -Rpass-analysis=size-info.
The point of this is to make it easier to collect statistics on how passes
modify programs in terms of code size. This is similar in concept to timing
reports, but using a remark-based interface makes it easy to diff changes over
multiple compilations of the same program.
By adding functionality like this, we can see
* Which passes impact code size the most
* How passes impact code size at different optimization levels
* Which pass might have contributed the most to an overall code size
regression
The patch lives in the legacy pass manager, but since it's simply emitting
remarks, it shouldn't be too difficult to adapt the functionality to the new
pass manager as well. This can also be adapted to handle MachineInstr counts in
code gen passes.
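A hedged sketch of what emitting such a remark can look like (simplified, with made-up argument names; not the exact code in this patch): the remark is filed under the "size-info" pass name so that -Rpass-analysis=size-info enables it, and the instruction-count change rides along as a named argument.

#include "llvm/IR/DiagnosticInfo.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/LLVMContext.h"

void emitSizeRemark(llvm::Function &F, const char *WhichPass, int64_t Delta) {
  using namespace llvm;
  // F is assumed to have a body; the remark is anchored on its entry block.
  OptimizationRemarkAnalysis R("size-info", "IRSizeChange",
                               DiagnosticLocation(), &F.getEntryBlock());
  R << WhichPass << ": IR instruction count changed by "
    << DiagnosticInfoOptimizationBase::Argument("DeltaInstrCount", Delta);
  F.getContext().diagnose(R); // surfaced via -Rpass-analysis=size-info
}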
https://reviews.llvm.org/D38768
llvm-svn: 332739
Summary:
When debugging function passes it is often useful to dump the whole module
before a transformation and then analyze that single transformation by
re-running it separately on that particular module state.
This change introduces the
-print-module-scope
debugging option, which forces all function-level IR dumps to become
whole-module dumps.
This option builds on top of the normal dumping controls:
-print-before/after
-filter-print-funcs
The plan is to eventually extend this option to cover other local passes
(at least loop passes) but that should go as a separate change.
Reviewers: sanjoy, weimingz, silvas, fedor.sergeev
Reviewed By: weimingz
Subscribers: apilipenko, skatkov, llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D40245
llvm-svn: 319561
These command line options are not intended for public use, and often
don't even make sense in the context of a particular tool anyway. About
90% of them are already hidden, but when people add new options they
forget to hide them, so if you were to make a brand new tool today, link
against one of LLVM's libraries, and run tool -help, you would get a
bunch of junk that doesn't make sense for the tool you're writing.
This patch hides these options. The real solution is to not have
libraries defining command line options, but that's a much larger effort
and not something I'm prepared to take on.
Differential Revision: https://reviews.llvm.org/D40674
llvm-svn: 319505
Summary:
This patch replaces a bunch of iterator-based for loops with range-based
for loops. There are 2 iterator-based loops left in this file in
removeNotPreservedAnalysis, but I think those cannot be replaced by
range-based for loops as they modify the container they are iterating
over.
Unless I missed something, this should be NFC, and I would appreciate it
if someone could have a quick look to confirm that.
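A generic illustration of the distinction (ordinary container code, not the pass-manager source): a read-only loop converts cleanly, while a loop that erases during iteration keeps the explicit iterator so it can use the iterator returned by erase().

#include <vector>

void example(std::vector<int> &V) {
  // Before: for (auto I = V.begin(), E = V.end(); I != E; ++I) { ... *I ... }
  for (int X : V) {
    (void)X; // read-only traversal: fine as a range-based loop
  }

  // Erasing invalidates iterators, so this loop stays iterator-based.
  for (auto I = V.begin(); I != V.end();) {
    if (*I < 0)
      I = V.erase(I); // erase() returns the next valid iterator
    else
      ++I;
  }
}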
Reviewers: chandlerc, pcc, jhenderson
Reviewed By: jhenderson
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D35310
llvm-svn: 307902
At the moment, the information collected when -time-passes is requested is only
printed when llvm_shutdown is called. This means that when linking against the LTO
library dynamically and using the C interface, it is not possible to see the timing
information, because llvm_shutdown cannot be called. This change modifies the LTO
code generation functions for both regular LTO and thin LTO to explicitly print and
reset the timing information.
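A hedged sketch of the idea (the actual change adds a helper inside the LTO code generators whose exact name is not shown here): print the accumulated -time-passes timers explicitly once code generation finishes, instead of relying on llvm_shutdown.

#include "llvm/Support/Timer.h"
#include "llvm/Support/raw_ostream.h"

void printTimePassesReport() {
  // Print every registered timer group (this is what -time-passes populates).
  llvm::TimerGroup::printAll(llvm::errs());
  // A real implementation would also reset the timers so they are not printed
  // a second time if llvm_shutdown does eventually run.
}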
I have tested that this works with our proprietary linker. However, as this relies
on a specific method of building and linking against the LTO library, I'm not sure
how or if this can be tested in the LLVM testsuite.
Reviewed by: mehdi_amini
Differential Revision: https://reviews.llvm.org/D32803
llvm-svn: 303152
Running tests with expensive checks enabled exhibits some problems with
verification of pass results.
First, pass verification may require the results of analyses that are not
available. For instance, verification of loop info requires the results of
dominator tree analysis. A pass may be marked as preserving loop info without
being dependent on DominatorTreePass, so when a pass manager tries to verify
that loop info is valid, it needs the dominator tree, but the corresponding
analysis may already have been destroyed because no user of it remained.
Another case is a pass that is skipped. For instance, entities with
available_externally linkage do not need code generation, so such passes are
skipped for them. In this case, result verification must also be skipped.
To solve these problems this change introduces a special flag to the Pass
structure to mark passes that have valid results. If this flag is reset,
verifications dependent on the pass result are skipped.
Differential Revision: https://reviews.llvm.org/D27190
llvm-svn: 291882
The previously used "names" were really descriptions (they use multiple
words and contain spaces). Use short, programming-language-identifier-like
strings for the "names", which should be used when exporting to
machine-parseable formats.
Also removed an unused TimerGroup from Hexagon.
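An illustration of the resulting API shape (the identifiers are examples, and the exact constructor signatures should be treated as an assumption): each timer now carries a short, machine-parseable name plus a separate human-readable description.

#include "llvm/Support/Timer.h"

void timeVirtRegMap() {
  static llvm::TimerGroup RAGroup("regalloc", "Register Allocation");
  // First argument: identifier-like name for machine-parseable output;
  // second argument: human-readable description for the text report.
  llvm::Timer T("virtregmap", "Virtual Register Map", RAGroup);
  T.startTimer();
  // ... the work being measured ...
  T.stopTimer();
}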
Differential Revision: https://reviews.llvm.org/D25583
llvm-svn: 287369
The core of the change is supposed to be NFC; however, it also fixes
what I believe was undefined behavior when calling:
va_start(ValueArgs, Desc);
with Desc being a StringRef.
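The pattern in isolation (hypothetical functions, not the original code): naming a class-type parameter such as StringRef in va_start is the questionable behavior referred to above; the fix is to pass a trivial built-in type (here a plain pointer) before the ellipsis.

#include <cstdarg>
#include "llvm/ADT/StringRef.h"

void bad(llvm::StringRef Desc, ...) {
  va_list Args;
  va_start(Args, Desc); // problematic: the parameter named here is a class type
  va_end(Args);
}

void good(const char *Desc, ...) {
  va_list Args;
  va_start(Args, Desc); // fine: a plain pointer
  va_end(Args);
}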
Differential Revision: https://reviews.llvm.org/D25342
llvm-svn: 283671
Summary:
This patch implements the "-print-funcs" option to support function filtering for IR printing options such as -print-after-all, -print-before, etc.
Examples:
-print-after-all -print-funcs=foo,bar
Reviewers: mcrosier, joker.eph
Subscribers: tejohnson, joker.eph, llvm-commits
Differential Revision: http://reviews.llvm.org/D15776
llvm-svn: 256952
254950 ended up being not NFC. The previous code was overriding the flags for whether an instruction read or wrote memory using the target specific flags returned via TTI. I'd missed this in my refactoring. Since I mistakenly built only x86 and didn't notice the number of unsupported tests, I didn't catch that before the original checkin.
This raises an interesting issue though. Given we have function attributes (i.e. readonly, readnone, argmemonly) which describe the aliasing of intrinsics, why does TTI have this information overriding the instruction definition at all? I see no reason for this, but decided to preserve existing behavior for the moment. The root issue might be that we don't have a "writeonly" attribute.
Original commit message:
[EarlyCSE] Simplify and invert ParseMemoryInst [NFCI]
Restructure ParseMemoryInst - which was introduced to abstract over target specific load and store instructions - to just query the underlying instructions. In theory, this could be slightly slower than caching the results, but in practice, it's very unlikely to be measurable.
The simple query scheme makes it far easier to understand, and much easier to extend with new queries. Given I'm about to need to add new query types, doing the cleanup first seemed worthwhile.
Do we still believe the target specific intrinsic handling is worthwhile in EarlyCSE? It adds quite a bit of complexity and makes the code harder to read. Being able to delete the abstraction entirely would be wonderful.
llvm-svn: 254957
In 254760, I introduced the usage of a BumpPtrAllocator for the AnalysisUsage instances held by the PassManger. This turns out to have been incorrect since a BumpPtrAllocator does not run the destructors of objects when deallocating memory. Since a few of our SmallVector's had grown beyond their small size, we end up with some leaked memory. We need to use a SpecificBumpPtrAllocator instead.
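A standalone illustration of the difference (simplified types, not the pass-manager code): BumpPtrAllocator never runs destructors, so an object owning heap memory, such as a SmallVector grown past its inline storage, leaks; SpecificBumpPtrAllocator<T> destroys every T it handed out when it is itself destroyed.

#include <new>
#include "llvm/ADT/SmallVector.h"
#include "llvm/Support/Allocator.h"

struct UsageLists {
  llvm::SmallVector<unsigned, 4> Required; // heap-allocates once it grows
};

void demo() {
  llvm::SpecificBumpPtrAllocator<UsageLists> Alloc;
  UsageLists *U = new (Alloc.Allocate()) UsageLists();
  U->Required.assign(64, 0); // grows well past the inline storage
  // ~SpecificBumpPtrAllocator runs ~UsageLists for every allocated object, so
  // the vector's heap buffer is freed; a plain BumpPtrAllocator would leak it.
}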
llvm-svn: 254803
The LegacyPassManager was storing an instance of AnalysisUsage for each instance of each pass. In practice, most instances of a single pass class share the same dependencies. We can't rely on this because passes can (and some do) have dynamic dependencies based on instance options.
We can exploit the likely commonality by uniquing the usage information after querying the pass, but before storing it into the pass manager. This greatly reduces memory consumption by the AnalysisUsage objects. For a long pass pipeline, I measured a decrease in memory consumption for this storage of about 50%. I have not measured on the default O3 pipeline, but I suspect it will see some benefit as well since many passes are repeated (e.g. InstCombine).
Differential Revision: http://reviews.llvm.org/D14677
llvm-svn: 254760
don't correctly implement the scoping rules of C++11 range based for
loops. This kind of aliasing isn't a good idea anyways (and wasn't
really intended).
llvm-svn: 247241