This is a workaround for clang adding noinline to all functions at
-O0. Previously, we would just add alwaysinline, and the verifier
would complain about having both noinline and alwaysinline. We
currently can't truly codegen this case as a freestanding function, so
we override the user-forced noinline.
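A minimal sketch of the override, using the usual Function attribute API (this
is illustrative, not the exact pass code):
```
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

using namespace llvm;

// Drop the -O0 noinline before forcing alwaysinline so the verifier never
// sees both attributes on the same function.
static void forceAlwaysInline(Function &F) {
  if (F.hasFnAttribute(Attribute::NoInline))
    F.removeFnAttr(Attribute::NoInline);
  F.addFnAttr(Attribute::AlwaysInline);
}
```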
Currently, all generated lit.site.cfg files contain absolute paths.
This makes it impossible to build on one machine, and then transfer the
build output to another machine for test execution. Being able to do
this is useful for several use cases:
1. When running tests on an ARM machine, it would be possible to build
on a fast x86 machine and then copy the build artifacts over for testing.
2. It allows running several test suites (clang, llvm, lld) on 3
different machines, reducing test time from sum(each test suite time) to
max(each test suite time).
This patch makes it possible to pass a list of variables that should be
relative in the generated lit.site.cfg.py file to
configure_lit_site_cfg(). The lit.site.cfg.py.in file needs to call
`path()` on these variables, so that the paths are converted to absolute
form at lit start time.
The testers would have to have an LLVM checkout at the same revision,
and the build dir would have to be at the same relative path as on the
builder.
This does not yet cover how to figure out which files to copy from the
builder machine to the tester machines. (One idea is to look at the
`--graphviz=test.dot` output and copy all inputs of the `check-llvm`
target.)
Differential Revision: https://reviews.llvm.org/D77184
bitcast (shuf V, MaskC) --> shuf (bitcast V), MaskC'
We do not attempt this in InstCombine because we do not want to change
types and create new shuffle ops that are potentially not lowered as
well as the original code. Here, we can check the cost model to see if
it is worthwhile.
I've aggressively enabled this transform even if the types are the same
size and/or equal cost because moving the bitcast allows InstCombine to
make further simplifications.
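A rough sketch of the cost check; the helper and variable names are assumed
here, not taken from the actual VectorCombine code:
```
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/DerivedTypes.h"

using namespace llvm;

// Move the bitcast ahead of the shuffle only when a shuffle of the new
// vector type is no more expensive than the original shuffle. The equal-cost
// case is allowed on purpose: exposing the bitcast lets InstCombine
// simplify further.
static bool shouldMoveBitcastBeforeShuffle(const TargetTransformInfo &TTI,
                                           VectorType *SrcTy,
                                           VectorType *DstTy) {
  auto SrcCost =
      TTI.getShuffleCost(TargetTransformInfo::SK_PermuteSingleSrc, SrcTy);
  auto DstCost =
      TTI.getShuffleCost(TargetTransformInfo::SK_PermuteSingleSrc, DstTy);
  return DstCost <= SrcCost;
}
```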
In the motivating cases from PR35454:
https://bugs.llvm.org/show_bug.cgi?id=35454
...this is enough to let instcombine and the backend eliminate the
redundant shuffles, but we probably want to extend VectorCombine to
handle the inverse pattern (shuffle-of-bitcast) to get that
simplification directly in IR.
Differential Revision: https://reviews.llvm.org/D76727
SILoadStoreOptimizer::checkAndPrepareMerge() expects the base and
paired instructions to come in order and scans the MBB from the base to
the paired instruction. The original order can change if
there was a dependent instruction in between and the base instruction
was moved.
Fixed by bailing out of the optimization. In theory it might still be
possible to perform the merge by swapping the instructions, but in practice
it bails anyway because it finds a dependency on the same instruction
that caused the base to be moved.
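A hedged sketch of the order check that triggers the bail-out; the variable
names are assumed, not the optimizer's own:
```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"

#include <iterator>

using namespace llvm;

// Return true if the paired instruction still follows the base instruction
// in the block. If scheduling moved the base past it, the caller gives up on
// the merge instead of scanning backwards.
static bool pairedFollowsBase(MachineInstr &BaseMI, MachineInstr &PairedMI) {
  for (MachineBasicBlock::iterator It =
           std::next(MachineBasicBlock::iterator(BaseMI)),
           E = BaseMI.getParent()->end();
       It != E; ++It)
    if (&*It == &PairedMI)
      return true;
  return false; // original order was changed; bail out
}
```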
Differential Revision: https://reviews.llvm.org/D77245
These are versions of a function that regressed with:
rGf2fbdf76d8d0
That particular problem occurs with an instcombine-simplifycfg-instcombine
sequence, but we can show that it exists within instcombine alone using
other variations of the pattern.
Uploading output from `git format-patch` fails when the git version has
more than 2 dots, e.g. git version 2.24.1.windows.2, which is
currently recommended by e.g. GitExtensions, or 2.24.1.rc on Linux.
Differential Revision: https://reviews.llvm.org/D72374
This does not change much in code generation, but in rare cases MachineCSE
can figure out that an instruction is redundant after commuting it.
Review: Ulrich Weigand
This reverts commit f2fbdf76d8d07f6a0fbd97825cbc533660d64a37.
As noted in the post-commit thread:
https://reviews.llvm.org/rGf2fbdf76d8d0
...this can obscure a min/max pattern where the components
have extra uses. We can show that the problem is independent
of this change with a slightly modified source example, so
this revert just delays/reduces the need to fix the real
problem.
We need to improve our analysis of negation or -- more
generally -- subtraction using patches like D77230 or D68408.
Summary:
Splitting knowledge retention into queries (in Analysis) and a builder (in
Transform/Utils) allows the queries and Transform/Utils to use Analysis.
Reviewers: jdoerfert, sstefan1
Reviewed By: jdoerfert
Subscribers: mgorny, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D77171
This patch adds
- New arguments to getMinPrefetchStride() to let the target decide on a
per-loop basis if software prefetching should be done even with a stride
within the limit of the hw prefetcher (see the sketch after this list).
- New TTI hook enableWritePrefetching() to let a target do write prefetching
by default (defaults to false).
- In LoopDataPrefetch:
- A search through the whole loop to gather information before emitting any
prefetches. This way the target can get information via the new arguments to
getMinPrefetchStride() and emit prefetches more selectively. The collected
information includes: whether the loop has a call, how many memory accesses
there are, how many of them are strided, and how many prefetches will cover
them. This is NFC compared to before, as long as the target does not change
its definition of getMinPrefetchStride().
- If a previous access to the same exact address was 'read', and the
current one is 'write', make it a 'write' prefetch.
- If two accesses that are covered by the same prefetch do not dominate
each other, put the prefetch in a block that dominates both of them.
- If a ConstantMaxTripCount is less than ItersAhead, then skip the loop.
- A SystemZ implementation of getMinPrefetchStride().
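A hedged sketch of how a target might override the extended hooks; the
parameter list mirrors the description above, but the exact types and names
in the tree may differ:
```
struct MyTargetTTIImpl {
  // Decide per loop whether software prefetching is worthwhile even when the
  // stride is within reach of the hardware prefetcher.
  unsigned getMinPrefetchStride(unsigned NumMemAccesses,
                                unsigned NumStridedMemAccesses,
                                unsigned NumPrefetches, bool HasCall) const {
    // Example policy only: prefetch more eagerly in loops containing calls,
    // where the hardware prefetcher tends to be less effective.
    return HasCall ? 1 : 2048;
  }

  // Opt in to emitting write prefetches (the default hook returns false).
  bool enableWritePrefetching() const { return true; }
};
```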
Review: Ulrich Weigand, Michael Kruse
Differential Revision: https://reviews.llvm.org/D70228
Add an option to llvm-dwarfdump to calculate the number of bytes within
the debug sections. Dump these numbers when using the --statistics
option as well.
This is an initial patch (e.g. we should support other units,
since we only support 'bytes' now).
Differential Revision: https://reviews.llvm.org/D74205
This removes some includes/forward-declarations that don't seem to be necessary in the MCA core headers
Based off a cppclean report
Differential Revision: https://reviews.llvm.org/D77073
Having the sync script work for this file seems better
than matching the order of headers in the cmake file.
Also, not having to manually sort the list is nice, even
if gn's automated sorting doesn't quite match the artisanal
order in the cmake file.
This adds MVE vmull patterns, which are conceptually the same as
mul(vmovl, vmovl), and so the tablegen patterns follow the same
structure.
For i8 and i16 this is simple enough, but in the i32 version the
multiply (in 64 bits) is illegal, meaning we need to catch the pattern
earlier in a DAG fold. Because bitcasts are involved in the zext
versions, the patterns are a little different between little and big
endian; I have only added little-endian support in this patch.
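An illustrative source-level view of the i32 case (plain C++, not the
tablegen pattern itself): the multiply is performed in the double-width type,
which is exactly what VMULL computes, but a 4 x i64 multiply is not legal for
MVE, so the pattern has to be matched earlier as a DAG fold.
```
#include <cstdint>

void mull_s32(int64_t d[4], const int32_t a[4], const int32_t b[4]) {
  for (int i = 0; i < 4; ++i)
    d[i] = static_cast<int64_t>(a[i]) * static_cast<int64_t>(b[i]); // vmull.s32
}
```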
Differential Revision: https://reviews.llvm.org/D76740
The unpredictable/hasSideEffects flag is usually inferred by tablegen
from whether the instruction has a tablegen pattern (and that pattern
only has a single output instruction). Now that the MVE intrinsics are
all committed and producing code, the remaining instructions still
marked as unpredictable need to be specially handled. This adds the flag
directly to instructions that need it, notably the V*MLAL instructions
and some of the MOVs.
Differential Revision: https://reviews.llvm.org/D76910
Summary:
In the patch https://reviews.llvm.org/D42654
(De-duplicate utils/update_{llc_,}test_checks.py), some common parts were
moved to common.py. SCRUB_LOOP_COMMENT_RE was moved to
common.py, but it was not removed from asm.py.
This patch removes the redundant SCRUB_LOOP_COMMENT_RE in asm.py
and uses common.SCRUB_LOOP_COMMENT_RE instead.
Summary:
This allows doing `memcmp(p, q, 7)` with 2 loads instead of a call to
memcmp.
This fixes part of PR45147.
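A hand-written C++ sketch of the idea behind the expansion for the
equality-only case (not the generated IR); an ordered comparison additionally
needs byte swaps on little-endian targets, which is omitted here:
```
#include <cstdint>
#include <cstring>

// memcmp(p, q, 7) == 0 can be decided with two overlapping 4-byte loads from
// each buffer instead of a library call.
static bool sevenBytesDiffer(const void *P, const void *Q) {
  uint32_t P0, P1, Q0, Q1;
  std::memcpy(&P0, P, 4);
  std::memcpy(&P1, static_cast<const char *>(P) + 3, 4); // overlaps bytes 3..6
  std::memcpy(&Q0, Q, 4);
  std::memcpy(&Q1, static_cast<const char *>(Q) + 3, 4);
  return ((P0 ^ Q0) | (P1 ^ Q1)) != 0;
}
```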
Reviewers: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D76133
As pointed out by @thakis, currently CallSiteSplitting bails out after
checking the first PHI node. We should check all PHI nodes, until we
find one where call site splitting is beneficial.
This patch also slightly simplifies the code using BasicBlock::phis().
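A minimal sketch of the new scan; isBeneficialPHI is a hypothetical stand-in
for the real profitability check:
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hypothetical profitability check (placeholder body for illustration only).
static bool isBeneficialPHI(const PHINode &PN) {
  return PN.getNumIncomingValues() > 1;
}

// Walk all PHI nodes via BasicBlock::phis() and reject the split only if
// none of them is worthwhile, instead of deciding after the first PHI.
static bool anyPHIWorthSplitting(BasicBlock &BB) {
  for (PHINode &PN : BB.phis())
    if (isBeneficialPHI(PN))
      return true;
  return false;
}
```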
Reviewers: davidxl, junbuml, thakis
Reviewed By: davidxl
Differential Revision: https://reviews.llvm.org/D77089
When compiling AMDGPUDisassembler.cpp in a stage 1 trunk build with
CMAKE_BUILD_TYPE=RelWithDebInfo LLVM_USE_SANITIZER=Address LiveDebugVariables
accounts for 21.5% of wall clock time. This fix reduces that to 1.2% by
replacing a linked-list lookup with a map lookup.
Note that the linked list is still used to group UserValues by vreg. The vreg
lookups don't cause any problems in this pathological case.
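A generic illustration of the data-structure change; the container and key
type below are stand-ins, not the actual LiveDebugVariables types:
```
#include <string>
#include <unordered_map>

// Lookups that used to walk the UserValue linked list now go through a side
// map, so each query is O(1) on average instead of O(n). The list itself is
// kept, but only for grouping by vreg.
struct UserValue {
  UserValue *Next = nullptr;
  // ... per-variable location data ...
};

using UserValueMap = std::unordered_map<std::string, UserValue *>;

UserValue *lookupUserValue(const UserValueMap &Map, const std::string &Var) {
  auto It = Map.find(Var);
  return It == Map.end() ? nullptr : It->second;
}
```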
This is the same idea as D68816, which was reverted, except that it is a less
intrusive fix.
Reviewed By: vsk
Differential Revision: https://reviews.llvm.org/D77226
Different file formats have different naming styles for the debug
sections. The method is implemented for ELF, COFF and Mach-O formats.
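A hedged sketch of the kind of normalization involved (not the actual
interface added by this patch):
```
#include <string>

// ELF/COFF use ".debug_info" while Mach-O uses "__debug_info"; both map to
// the same canonical "debug_info" name.
std::string canonicalDebugSectionName(const std::string &Name) {
  if (Name.rfind("__", 0) == 0) // Mach-O prefix
    return Name.substr(2);
  if (!Name.empty() && Name.front() == '.') // ELF/COFF prefix
    return Name.substr(1);
  return Name;
}
```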
Differential Revision: https://reviews.llvm.org/D76276
This is a cleanup and normalization patch that also enables reuse with
Flang later on. A follow up will clean up and move the directive ->
clauses mapping.
Differential Revision: https://reviews.llvm.org/D77112
Summary:
As noted in documentation, different repetition modes have different trade-offs:
> .. option:: -repetition-mode=[duplicate|loop]
>
> Specify the repetition mode. `duplicate` will create a large, straight line
> basic block with `num-repetitions` copies of the snippet. `loop` will wrap
> the snippet in a loop which will be run `num-repetitions` times. The `loop`
> mode tends to better hide the effects of the CPU frontend on architectures
> that cache decoded instructions, but consumes a register for counting
> iterations.
Indeed. Example:
>>! In D74156#1873657, @lebedev.ri wrote:
> At least for `CMOV`, i'm seeing wildly different results
> | | Latency | RThroughput |
> | duplicate | 1 | 0.8 |
> | loop | 2 | 0.6 |
> where latency=1 seems correct, and i'd expect the througput to be close to 1/2 (since there are two execution units).
This isn't great for analysis, at least for schedule model development.
As discussed in excruciating detail in
>>! In D74156#1924514, @gchatelet wrote:
>>>! In D74156#1920632, @lebedev.ri wrote:
>> ... did that explanation of the question i'm having made any sense?
>
> Thx for digging in the conversation !
> Ok it makes more sense now.
>
> I discussed it a bit with @courbet:
> - We want the analysis tool to stay simple so we'd rather not make it knowledgeable of the repetition mode.
> - We'd like to still be able to select either repetition mode to dig into special cases
>
> So we could add a third `min` repetition mode that would run both and take the minimum. It could be the default option.
> Would you have some time to look what it would take to add this third mode?
there appears to be agreement that this is indeed sub-par, and that we
should provide an optional, measurement-time (not analysis-time) way to
rectify the situation.
However, the solution isn't entirely straightforward.
We can't just add an actual 'multiplexer' `MinSnippetRepetitor`, because
if we just concatenate snippets produced by `DuplicateSnippetRepetitor`
and `LoopSnippetRepetitor` and run+measure that, the measurement will
naturally be different from what we'd get by running+measuring
them separately and taking the min.
([[ https://www.wolframalpha.com/input/?i=%28x%2By%29%2F2+%21%3D+min%28x%2C+y%29 | `time(D+L)/2 != min(time(D), time(L))` ]])
Also, it seems best to me to have a single snippet instead of generating
a snippet per repetition mode, since the only difference here is that the
loop repetition mode reserves one register for the loop counter.
As far as I can tell, we can either teach `BenchmarkRunner::runConfiguration()`
to produce a single report given multiple repetitors (as in the patch),
or do that one layer higher - don't modify `BenchmarkRunner::runConfiguration()`,
produce multiple reports, don't actually print each one, but aggregate them somehow
and only print the final one.
Initially I went ahead with the latter approach, but it didn't look like a natural fit;
the former (as in the diff) seems like a better fit to me.
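A hedged sketch of the 'min' aggregation itself; the names are illustrative,
not the llvm-exegesis API:
```
#include <map>
#include <string>
#include <vector>

// Run the snippet once per repetitor and keep, for each measurement key, the
// smallest value observed across the runs.
using Measurements = std::map<std::string, double>;

Measurements takeMinimum(const std::vector<Measurements> &PerRepetitor) {
  Measurements Result;
  for (const Measurements &Run : PerRepetitor)
    for (const auto &KV : Run) {
      auto It = Result.find(KV.first);
      if (It == Result.end() || KV.second < It->second)
        Result[KV.first] = KV.second;
    }
  return Result;
}
```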
There's also the question of test coverage. It certainly works here:
```
$ ./bin/llvm-exegesis --opcode-name=CMOV64rr --mode=inverse_throughput --repetition-mode=duplicate
Check generated assembly with: /usr/bin/objdump -d /tmp/snippet-8fb949.o
---
mode: inverse_throughput
key:
instructions:
- 'CMOV64rr RAX RAX R11 i_0x0'
- 'CMOV64rr RBP RBP R15 i_0x0'
- 'CMOV64rr RBX RBX RBX i_0x0'
- 'CMOV64rr RCX RCX RBX i_0x0'
- 'CMOV64rr RDI RDI R10 i_0x0'
- 'CMOV64rr RDX RDX RAX i_0x0'
- 'CMOV64rr RSI RSI RAX i_0x0'
- 'CMOV64rr R8 R8 R8 i_0x0'
- 'CMOV64rr R9 R9 RDX i_0x0'
- 'CMOV64rr R10 R10 RBX i_0x0'
- 'CMOV64rr R11 R11 R14 i_0x0'
- 'CMOV64rr R12 R12 R9 i_0x0'
- 'CMOV64rr R13 R13 R12 i_0x0'
- 'CMOV64rr R14 R14 R15 i_0x0'
- 'CMOV64rr R15 R15 R13 i_0x0'
config: ''
register_initial_values:
- 'RAX=0x0'
- 'R11=0x0'
- 'EFLAGS=0x0'
- 'RBP=0x0'
- 'R15=0x0'
- 'RBX=0x0'
- 'RCX=0x0'
- 'RDI=0x0'
- 'R10=0x0'
- 'RDX=0x0'
- 'RSI=0x0'
- 'R8=0x0'
- 'R9=0x0'
- 'R14=0x0'
- 'R12=0x0'
- 'R13=0x0'
cpu_name: bdver2
llvm_triple: x86_64-unknown-linux-gnu
num_repetitions: 10000
measurements:
- { key: inverse_throughput, value: 0.819, per_snippet_value: 12.285 }
error: ''
info: instruction has tied variables, using static renaming.
assembled_snippet: 5541574156415541545348B8000000000000000049BB00000000000000004883EC08C7042400000000C7442404000000009D48BD000000000000000049BF000000000000000048BB000000000000000048B9000000000000000048BF000000000000000049BA000000000000000048BA000000000000000048BE000000000000000049B8000000000000000049B9000000000000000049BE000000000000000049BC000000000000000049BD0000000000000000490F40C3490F40EF480F40DB480F40CB490F40FA480F40D0480F40F04D0F40C04C0F40CA4C0F40D34D0F40DE4D0F40E14D0F40EC4D0F40F74D0F40FD490F40C35B415C415D415E415F5DC3
...
$ ./bin/llvm-exegesis --opcode-name=CMOV64rr --mode=inverse_throughput --repetition-mode=loop
Check generated assembly with: /usr/bin/objdump -d /tmp/snippet-051eb3.o
---
mode: inverse_throughput
key:
instructions:
- 'CMOV64rr RAX RAX R11 i_0x0'
- 'CMOV64rr RBP RBP RSI i_0x0'
- 'CMOV64rr RBX RBX R9 i_0x0'
- 'CMOV64rr RCX RCX RSI i_0x0'
- 'CMOV64rr RDI RDI RBP i_0x0'
- 'CMOV64rr RDX RDX R9 i_0x0'
- 'CMOV64rr RSI RSI RDI i_0x0'
- 'CMOV64rr R9 R9 R12 i_0x0'
- 'CMOV64rr R10 R10 R11 i_0x0'
- 'CMOV64rr R11 R11 R9 i_0x0'
- 'CMOV64rr R12 R12 RBP i_0x0'
- 'CMOV64rr R13 R13 RSI i_0x0'
- 'CMOV64rr R14 R14 R14 i_0x0'
- 'CMOV64rr R15 R15 R10 i_0x0'
config: ''
register_initial_values:
- 'RAX=0x0'
- 'R11=0x0'
- 'EFLAGS=0x0'
- 'RBP=0x0'
- 'RSI=0x0'
- 'RBX=0x0'
- 'R9=0x0'
- 'RCX=0x0'
- 'RDI=0x0'
- 'RDX=0x0'
- 'R12=0x0'
- 'R10=0x0'
- 'R13=0x0'
- 'R14=0x0'
- 'R15=0x0'
cpu_name: bdver2
llvm_triple: x86_64-unknown-linux-gnu
num_repetitions: 10000
measurements:
- { key: inverse_throughput, value: 0.6083, per_snippet_value: 8.5162 }
error: ''
info: instruction has tied variables, using static renaming.
assembled_snippet: 5541574156415541545348B8000000000000000049BB00000000000000004883EC08C7042400000000C7442404000000009D48BD000000000000000048BE000000000000000048BB000000000000000049B9000000000000000048B9000000000000000048BF000000000000000048BA000000000000000049BC000000000000000049BA000000000000000049BD000000000000000049BE000000000000000049BF000000000000000049B80200000000000000490F40C3480F40EE490F40D9480F40CE480F40FD490F40D1480F40F74D0F40CC4D0F40D34D0F40D94C0F40E54C0F40EE4D0F40F64D0F40FA4983C0FF75C25B415C415D415E415F5DC3
...
$ ./bin/llvm-exegesis --opcode-name=CMOV64rr --mode=inverse_throughput --repetition-mode=min
Check generated assembly with: /usr/bin/objdump -d /tmp/snippet-c7a47d.o
Check generated assembly with: /usr/bin/objdump -d /tmp/snippet-2581f1.o
---
mode: inverse_throughput
key:
instructions:
- 'CMOV64rr RAX RAX R11 i_0x0'
- 'CMOV64rr RBP RBP R10 i_0x0'
- 'CMOV64rr RBX RBX R10 i_0x0'
- 'CMOV64rr RCX RCX RDX i_0x0'
- 'CMOV64rr RDI RDI RAX i_0x0'
- 'CMOV64rr RDX RDX R9 i_0x0'
- 'CMOV64rr RSI RSI RAX i_0x0'
- 'CMOV64rr R9 R9 RBX i_0x0'
- 'CMOV64rr R10 R10 R12 i_0x0'
- 'CMOV64rr R11 R11 RDI i_0x0'
- 'CMOV64rr R12 R12 RDI i_0x0'
- 'CMOV64rr R13 R13 RDI i_0x0'
- 'CMOV64rr R14 R14 R9 i_0x0'
- 'CMOV64rr R15 R15 RBP i_0x0'
config: ''
register_initial_values:
- 'RAX=0x0'
- 'R11=0x0'
- 'EFLAGS=0x0'
- 'RBP=0x0'
- 'R10=0x0'
- 'RBX=0x0'
- 'RCX=0x0'
- 'RDX=0x0'
- 'RDI=0x0'
- 'R9=0x0'
- 'RSI=0x0'
- 'R12=0x0'
- 'R13=0x0'
- 'R14=0x0'
- 'R15=0x0'
cpu_name: bdver2
llvm_triple: x86_64-unknown-linux-gnu
num_repetitions: 10000
measurements:
- { key: inverse_throughput, value: 0.6073, per_snippet_value: 8.5022 }
error: ''
info: instruction has tied variables, using static renaming.
assembled_snippet: 5541574156415541545348B8000000000000000049BB00000000000000004883EC08C7042400000000C7442404000000009D48BD000000000000000049BA000000000000000048BB000000000000000048B9000000000000000048BA000000000000000048BF000000000000000049B9000000000000000048BE000000000000000049BC000000000000000049BD000000000000000049BE000000000000000049BF0000000000000000490F40C3490F40EA490F40DA480F40CA480F40F8490F40D1480F40F04C0F40CB4D0F40D44C0F40DF4C0F40E74C0F40EF4D0F40F14C0F40FD490F40C3490F40EA5B415C415D415E415F5DC35541574156415541545348B8000000000000000049BB00000000000000004883EC08C7042400000000C7442404000000009D48BD000000000000000049BA000000000000000048BB000000000000000048B9000000000000000048BA000000000000000048BF000000000000000049B9000000000000000048BE000000000000000049BC000000000000000049BD000000000000000049BE000000000000000049BF000000000000000049B80200000000000000490F40C3490F40EA490F40DA480F40CA480F40F8490F40D1480F40F04C0F40CB4D0F40D44C0F40DF4C0F40E74C0F40EF4D0F40F14C0F40FD4983C0FF75C25B415C415D415E415F5DC3
...
```
but I'm open to suggestions as to how to test that.
I have also gone with the suggestion to default to this new mode.
This was irking me for some time, so I'm happy to finally see progress here.
Looking forward to feedback.
Reviewers: courbet, gchatelet
Reviewed By: courbet, gchatelet
Subscribers: mstojanovic, RKSimon, llvm-commits, courbet, gchatelet
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D76921
It was added by D76591 for migration purposes (not all
printBranchOperand users have migrated to the overload with `uint64_t Address`).
Now that all have been migrated, the parameter can go away.
The requirement that a deopt parameter must also be a gc parameter if it can
be modified by the GC is very strong and difficult to follow.
The key example of why this can't work:
%p1 = bitcast i8* %p to i8*
statepoint [gc = (%p1)], [deopt = (%p1)]
The optimizer is allowed to replace either use (or both) of %p1 with %p.
If it updates only one of the two (entirely legal), the two sets do not overlap.
So this change removes the strong wording.
Reviewers: reames, dantrushin
Reviewed By: reames
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D77122