mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-22 18:54:02 +01:00
llvm-mirror/test/tools/llvm-mca/X86/option-all-views-1.s
Andrea Di Biagio 88551367a3 [MCA][Bottleneck Analysis] Teach how to compute a critical sequence of instructions based on the simulation.
This patch teaches the bottleneck analysis how to identify and print the most
expensive sequence of instructions according to the simulation. Fixes PR37494.

The goal is to help users identify the sequence of instructions that is most
critical for performance.

A dependency graph is internally used by the bottleneck analysis to describe
data dependencies and processor resource interferences between instructions.

There is one node in the graph for every instruction in the input assembly
sequence. The number of nodes in the graph is independent of the number of
iterations simulated by the tool: a single node of the graph represents all the
instances of the same instruction contributed by the simulated iterations.

Edges are dynamically "discovered" by the bottleneck analysis by observing
instruction state transitions and "backend pressure increase" events generated
by the Execute stage. Information from the events is used to identify critical
dependencies, and materialize edges in the graph. A dependency edge is uniquely
identified by a pair of node identifiers plus an instance of struct
DependencyEdge::Dependency (which provides more details about the actual
dependency kind).
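The node and edge model described above can be sketched as follows. This is a hypothetical Python sketch for illustration only; the actual implementation is C++, and only the names `DependencyEdge` and `Dependency` come from the patch, while the field names and dependency kinds shown here are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class DependencyKind(Enum):
    # Illustrative kinds; the patch distinguishes register and memory
    # dependencies as well as processor resource interferences.
    REGISTER = "REGISTER"
    MEMORY = "MEMORY"
    RESOURCE = "RESOURCE"

@dataclass(frozen=True)
class Dependency:
    kind: DependencyKind
    info: str  # e.g. a register name such as "%eax", or a resource id

@dataclass
class DependencyEdge:
    from_node: int      # node id of the producer instruction
    to_node: int        # node id of the consumer instruction
    dep: Dependency     # details about the actual dependency kind
    frequency: int = 1  # how many iterations exhibited this dependency
    cost: int = 0       # cumulative cost accumulated over all iterations
```

An edge is uniquely identified by the pair `(from_node, to_node)` plus the `Dependency` instance, mirroring the description above.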

The bottleneck analysis internally ranks dependency edges based on their impact
on the runtime (see field DependencyEdge::Dependency::Cost). To this end, each
edge of the graph has an associated cost. By default, the cost of an edge is a
function of its latency (in cycles). In practice, the cost of an edge is also a
function of the number of cycles where the dependency has been seen as
'contributing to backend pressure increases'. The idea is that the higher the
cost of an edge, the higher the impact of the dependency on performance. Put
another way, the cost of an edge is a measure of its criticality for
performance.

Note that the same edge may be found in multiple iterations of the simulated loop.
The logic that adds new edges to the graph checks if an equivalent dependency
already exists (duplicate edges are not allowed). If an equivalent dependency
edge is found, field DependencyEdge::Frequency of that edge is incremented by
one, and the new cost is cumulatively added to the existing edge cost.
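The deduplication logic just described could be sketched like this (an illustrative sketch only; the function name, the keying scheme, and the dict representation are assumptions, not the actual llvm-mca code):

```python
# Edges are keyed by (from_node, to_node, dependency); duplicates are merged
# instead of being inserted twice.
def add_edge(edges, from_node, to_node, dep, cost):
    key = (from_node, to_node, dep)
    if key in edges:
        edge = edges[key]
        edge["frequency"] += 1   # same dependency seen in another iteration
        edge["cost"] += cost     # new cost is added to the existing edge cost
    else:
        edges[key] = {"frequency": 1, "cost": cost}

# The same register dependency observed in two iterations ends up as one
# edge with frequency 2 and the costs accumulated.
edges = {}
add_edge(edges, 0, 1, ("REGISTER", "%eax"), 3)
add_edge(edges, 0, 1, ("REGISTER", "%eax"), 4)
```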

At the end of simulation, costs are propagated to nodes through the edges of the
graph. The goal is to identify a critical sequence from a node of the root-set
(composed of the nodes of the graph with no predecessors) to a 'sink node' with no
successors.  Note that the graph is intentionally kept acyclic to minimize the
complexity of the critical sequence computation algorithm (complexity is
currently linear in the number of nodes in the graph).
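On an acyclic graph, the most expensive root-to-sink sequence can be found with a single pass in topological order, which keeps the complexity linear in the nodes and edges. A minimal sketch under that assumption (illustrative only, not the llvm-mca implementation):

```python
from collections import deque

def critical_sequence(num_nodes, edges):
    """edges: list of (src, dst, cost) tuples describing an acyclic graph.
    Returns the node sequence of the maximum-cost root-to-sink path."""
    adj = [[] for _ in range(num_nodes)]
    indegree = [0] * num_nodes
    for src, dst, cost in edges:
        adj[src].append((dst, cost))
        indegree[dst] += 1

    # Kahn's algorithm: process nodes in topological order, propagating the
    # best accumulated cost from the root-set (nodes with no predecessors).
    best = [0] * num_nodes     # max accumulated cost reaching each node
    pred = [None] * num_nodes  # predecessor on the best path
    queue = deque(n for n in range(num_nodes) if indegree[n] == 0)
    while queue:
        n = queue.popleft()
        for dst, cost in adj[n]:
            if best[n] + cost > best[dst]:
                best[dst] = best[n] + cost
                pred[dst] = n
            indegree[dst] -= 1
            if indegree[dst] == 0:
                queue.append(dst)

    # Walk back from the most expensive node to recover the sequence.
    end = max(range(num_nodes), key=lambda n: best[n])
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return path[::-1]
```

For example, with edges `[(0, 1, 5), (0, 2, 1), (1, 3, 2), (2, 3, 10)]` the path through node 2 accumulates cost 11 versus 7 through node 1, so the critical sequence is `[0, 2, 3]`.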

The critical path is finally computed as a sequence of dependency edges. For
edges describing processor resource interferences, the view also prints a
so-called "interference probability" value (by dividing field
DependencyEdge::Frequency by the total number of iterations).
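The interference probability described above is simply the edge frequency divided by the iteration count. A tiny sketch with illustrative numbers (the function name is an assumption):

```python
def interference_probability(edge_frequency, num_iterations):
    # Fraction of simulated iterations in which this resource
    # interference was actually observed.
    return edge_frequency / num_iterations

# An interference seen in 85 of 100 simulated iterations has a
# probability of 85%.
print(f"{interference_probability(85, 100):.0%}")  # -> 85%
```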

Examples of critical sequence computations can be found in tests added/modified
by this patch.

On output streams that support colored output, instructions from the critical
sequence are rendered with a different color.

Strictly speaking, the analysis conducted by the bottleneck analysis view is not
a critical path analysis. The cost of an edge doesn't depend only on the
dependency latency. More importantly, the cost of the same edge may be computed
differently by different iterations.

Dependencies are discovered dynamically based on the events generated by the
simulator, so their number is not fixed. This is especially true for edges that
model processor resource interferences; an interference may not occur in every
iteration. For that reason, it also makes sense to print out a "probability of
interference".

By construction, the accuracy of this analysis is (as always) strongly dependent
on the simulation (and therefore the quality of the information available in the
scheduling model).

That being said, the critical sequence effectively identifies a performance
bottleneck. Instructions from that sequence are expected to have a significant
impact on performance, so users can take advantage of this information to focus
their attention on specific interactions between instructions.
In my experience, it works quite well in practice, and produces useful
output (in a reasonable amount of time).

Differential Revision: https://reviews.llvm.org/D63543

llvm-svn: 364045
2019-06-21 13:32:54 +00:00


# NOTE: Assertions have been autogenerated by utils/update_mca_test_checks.py
# RUN: llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -all-views < %s | FileCheck %s -check-prefix=DEFAULTREPORT -check-prefix=FULLREPORT
# RUN: llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -all-views=true < %s | FileCheck %s -check-prefix=DEFAULTREPORT -check-prefix=FULLREPORT
# RUN: llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 -all-views=false < %s | FileCheck %s -check-prefix=NOREPORT -allow-empty
# RUN: llvm-mca -mtriple=x86_64-unknown-unknown -mcpu=btver2 < %s | FileCheck %s -check-prefix=DEFAULTREPORT
add %eax, %eax
# NOREPORT-NOT: {{.}}
# DEFAULTREPORT: Iterations: 100
# DEFAULTREPORT-NEXT: Instructions: 100
# DEFAULTREPORT-NEXT: Total Cycles: 103
# DEFAULTREPORT-NEXT: Total uOps: 100
# DEFAULTREPORT: Dispatch Width: 2
# DEFAULTREPORT-NEXT: uOps Per Cycle: 0.97
# DEFAULTREPORT-NEXT: IPC: 0.97
# DEFAULTREPORT-NEXT: Block RThroughput: 0.5
# FULLREPORT: Cycles with backend pressure increase [ 76.70% ]
# FULLREPORT-NEXT: Throughput Bottlenecks:
# FULLREPORT-NEXT: Resource Pressure [ 0.00% ]
# FULLREPORT-NEXT: Data Dependencies: [ 76.70% ]
# FULLREPORT-NEXT: - Register Dependencies [ 76.70% ]
# FULLREPORT-NEXT: - Memory Dependencies [ 0.00% ]
# FULLREPORT: Critical sequence based on the simulation:
# FULLREPORT: Instruction Dependency Information
# FULLREPORT-NEXT: +----< 0. addl %eax, %eax
# FULLREPORT-NEXT: |
# FULLREPORT-NEXT: | < loop carried >
# FULLREPORT-NEXT: |
# FULLREPORT-NEXT: +----> 0. addl %eax, %eax ## REGISTER dependency: %eax
# FULLREPORT-NEXT: |
# FULLREPORT-NEXT: | < loop carried >
# FULLREPORT-NEXT: |
# FULLREPORT-NEXT: +----> 0. addl %eax, %eax ## REGISTER dependency: %eax
# DEFAULTREPORT: Instruction Info:
# DEFAULTREPORT-NEXT: [1]: #uOps
# DEFAULTREPORT-NEXT: [2]: Latency
# DEFAULTREPORT-NEXT: [3]: RThroughput
# DEFAULTREPORT-NEXT: [4]: MayLoad
# DEFAULTREPORT-NEXT: [5]: MayStore
# DEFAULTREPORT-NEXT: [6]: HasSideEffects (U)
# DEFAULTREPORT: [1] [2] [3] [4] [5] [6] Instructions:
# DEFAULTREPORT-NEXT: 1 1 0.50 addl %eax, %eax
# FULLREPORT: Dynamic Dispatch Stall Cycles:
# FULLREPORT-NEXT: RAT - Register unavailable: 0
# FULLREPORT-NEXT: RCU - Retire tokens unavailable: 0
# FULLREPORT-NEXT: SCHEDQ - Scheduler full: 61 (59.2%)
# FULLREPORT-NEXT: LQ - Load queue full: 0
# FULLREPORT-NEXT: SQ - Store queue full: 0
# FULLREPORT-NEXT: GROUP - Static restrictions on the dispatch group: 0
# FULLREPORT: Dispatch Logic - number of cycles where we saw N micro opcodes dispatched:
# FULLREPORT-NEXT: [# dispatched], [# cycles]
# FULLREPORT-NEXT: 0, 22 (21.4%)
# FULLREPORT-NEXT: 1, 62 (60.2%)
# FULLREPORT-NEXT: 2, 19 (18.4%)
# FULLREPORT: Schedulers - number of cycles where we saw N micro opcodes issued:
# FULLREPORT-NEXT: [# issued], [# cycles]
# FULLREPORT-NEXT: 0, 3 (2.9%)
# FULLREPORT-NEXT: 1, 100 (97.1%)
# FULLREPORT: Scheduler's queue usage:
# FULLREPORT-NEXT: [1] Resource name.
# FULLREPORT-NEXT: [2] Average number of used buffer entries.
# FULLREPORT-NEXT: [3] Maximum number of used buffer entries.
# FULLREPORT-NEXT: [4] Total number of buffer entries.
# FULLREPORT: [1] [2] [3] [4]
# FULLREPORT-NEXT: JALU01 15 20 20
# FULLREPORT-NEXT: JFPU01 0 0 18
# FULLREPORT-NEXT: JLSAGU 0 0 12
# FULLREPORT: Retire Control Unit - number of cycles where we saw N instructions retired:
# FULLREPORT-NEXT: [# retired], [# cycles]
# FULLREPORT-NEXT: 0, 3 (2.9%)
# FULLREPORT-NEXT: 1, 100 (97.1%)
# FULLREPORT: Total ROB Entries: 64
# FULLREPORT-NEXT: Max Used ROB Entries: 22 ( 34.4% )
# FULLREPORT-NEXT: Average Used ROB Entries per cy: 17 ( 26.6% )
# FULLREPORT: Register File statistics:
# FULLREPORT-NEXT: Total number of mappings created: 200
# FULLREPORT-NEXT: Max number of mappings used: 44
# FULLREPORT: * Register File #1 -- JFpuPRF:
# FULLREPORT-NEXT: Number of physical registers: 72
# FULLREPORT-NEXT: Total number of mappings created: 0
# FULLREPORT-NEXT: Max number of mappings used: 0
# FULLREPORT: * Register File #2 -- JIntegerPRF:
# FULLREPORT-NEXT: Number of physical registers: 64
# FULLREPORT-NEXT: Total number of mappings created: 200
# FULLREPORT-NEXT: Max number of mappings used: 44
# DEFAULTREPORT: Resources:
# DEFAULTREPORT-NEXT: [0] - JALU0
# DEFAULTREPORT-NEXT: [1] - JALU1
# DEFAULTREPORT-NEXT: [2] - JDiv
# DEFAULTREPORT-NEXT: [3] - JFPA
# DEFAULTREPORT-NEXT: [4] - JFPM
# DEFAULTREPORT-NEXT: [5] - JFPU0
# DEFAULTREPORT-NEXT: [6] - JFPU1
# DEFAULTREPORT-NEXT: [7] - JLAGU
# DEFAULTREPORT-NEXT: [8] - JMul
# DEFAULTREPORT-NEXT: [9] - JSAGU
# DEFAULTREPORT-NEXT: [10] - JSTC
# DEFAULTREPORT-NEXT: [11] - JVALU0
# DEFAULTREPORT-NEXT: [12] - JVALU1
# DEFAULTREPORT-NEXT: [13] - JVIMUL
# DEFAULTREPORT: Resource pressure per iteration:
# DEFAULTREPORT-NEXT: [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13]
# DEFAULTREPORT-NEXT: 0.50 0.50 - - - - - - - - - - - -
# DEFAULTREPORT: Resource pressure by instruction:
# DEFAULTREPORT-NEXT: [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] Instructions:
# DEFAULTREPORT-NEXT: 0.50 0.50 - - - - - - - - - - - - addl %eax, %eax
# FULLREPORT: Timeline view:
# FULLREPORT-NEXT: 012
# FULLREPORT-NEXT: Index 0123456789
# FULLREPORT: [0,0] DeER . . . addl %eax, %eax
# FULLREPORT-NEXT: [1,0] D=eER. . . addl %eax, %eax
# FULLREPORT-NEXT: [2,0] .D=eER . . addl %eax, %eax
# FULLREPORT-NEXT: [3,0] .D==eER . . addl %eax, %eax
# FULLREPORT-NEXT: [4,0] . D==eER . . addl %eax, %eax
# FULLREPORT-NEXT: [5,0] . D===eER . . addl %eax, %eax
# FULLREPORT-NEXT: [6,0] . D===eER. . addl %eax, %eax
# FULLREPORT-NEXT: [7,0] . D====eER . addl %eax, %eax
# FULLREPORT-NEXT: [8,0] . D====eER. addl %eax, %eax
# FULLREPORT-NEXT: [9,0] . D=====eER addl %eax, %eax
# FULLREPORT: Average Wait times (based on the timeline view):
# FULLREPORT-NEXT: [0]: Executions
# FULLREPORT-NEXT: [1]: Average time spent waiting in a scheduler's queue
# FULLREPORT-NEXT: [2]: Average time spent waiting in a scheduler's queue while ready
# FULLREPORT-NEXT: [3]: Average time elapsed from WB until retire stage
# FULLREPORT: [0] [1] [2] [3]
# FULLREPORT-NEXT: 0. 10 3.5 0.1 0.0 addl %eax, %eax