mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 12:12:47 +01:00
Commit Graph

216773 Commits

Author SHA1 Message Date
Arthur Eubanks
8967f0ab59 [test] Properly match parameter/argument ABI attributes
These were found with D103412.
2021-05-31 09:12:18 -07:00
Andrea Di Biagio
44ebb7579c [MCA][NFCI] Minor changes to InstrBuilder and Instruction.
This is based on the assumption that most simulated instructions don't define
more than one or two registers. This is true for example on x86, where
most instruction definitions don't declare more than one register write.

The default code region size has been increased from 8 to 16. This is based on
the assumption that, for small microbenchmarks, the typical code snippet size is
often less than 16 instructions.

mca::Instruction now uses bitfields to pack flags.
No functional change intended.
2021-05-31 17:05:13 +01:00
Arthur Eubanks
0bb0030f11 [test] Fix addr-label.ll after D99707
Needs REQUIRES.
2021-05-31 09:02:07 -07:00
Arthur Eubanks
7178161bf7 Remove "Rewrite Symbols" from codegen pipeline
It breaks up the function pass manager in the codegen pipeline.

With empty parameters, it looks at the -mllvm flag -rewrite-map-file.
This is likely not in use.

Add a check that we only have one function pass manager in the codegen
pipeline.

Some tests relied on the fact that we had a module pass somewhere in the
codegen pipeline.

addr-label.ll crashes on ARM due to this change. This is because an
ARMConstantPoolConstant containing a BasicBlock to represent a
blockaddress may hold an invalid pointer to a BasicBlock if the
blockaddress is invalidated by its BasicBlock getting removed. In that
case all referencing blockaddresses are RAUW'd with a constant int. Making
ARMConstantPoolConstant::CVal a WeakVH fixes the crash, but I'm not sure
that's the right fix. As a workaround, create a barrier right before
ISel so that IR optimizations can't happen once an
ARMConstantPoolConstant has been created.

Reviewed By: rnk, MaskRay, compnerd

Differential Revision: https://reviews.llvm.org/D99707
2021-05-31 08:32:36 -07:00
Anirudh Prasad
37fdc69b24 [AsmParser][SystemZ][z/OS] Introducing HLASM Parser support to AsmParser - Part 2
- This patch is the second (and hopefully final) part of providing HLASM syntax for inline asm statements for z/OS to LLVM (continuing on from https://reviews.llvm.org/D98276)
- This second part deals with providing label support
- As mentioned in https://reviews.llvm.org/D98276, if the first token is not a space, we process the first token as a label and the remaining tokens as a possible machine instruction
- To achieve this, a new `parseAsHLASMLabel` function is introduced. This function processes the first token, validates whether it is an "acceptable" label according to HLASM standards, and then emits it
- After handling and emitting the label, the `parseAsMachineInstruction` function is called to process the remaining tokens as a machine instruction.

Reviewed By: uweigand

Differential Revision: https://reviews.llvm.org/D103320
2021-05-31 11:27:02 -04:00
Daniil Fukalov
47fca931a0 [NFC] MemoryDependenceAnalysis cleanup.
1. Removed redundant includes.
2. Removed the never-defined and never-used `releaseMemory()`.
3. Fixed the first-letter case of member function names.
4. Renamed the duplicated name `NonLocalDeps` (in the nested struct
   `NonLocalPointerInfo`) to `NonLocalDepsMap`.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D102358
2021-05-31 18:07:55 +03:00
Sanjay Patel
4f81b668e8 [SDAG] add check to sext-of-setcc fold to bypass changing a legal op
I accidentally pushed a draft of D103280 that was discussed
during the review, but it was not supposed to be the final
version.

Rather than revert and recommit, I'm updating the existing
code. This way we have a record of the codegen diff that
would result if we decide to remove this predicate in the
future.
2021-05-31 08:58:11 -04:00
Roman Lebedev
471f397407 [NFC] ScalarEvolution: apply SSO to the ExprValueMap value
ExprValueMap is a map from SCEV * to a set-vector of (Value *, ConstantInt *) pairs,
and while the map itself will likely be big-ish (have many keys),
it is a reasonable assumption that each key will refer to a small-ish
number of pairs.

In particular, looking at the n=512 case from
https://bugs.llvm.org/show_bug.cgi?id=50384,
a small size of 4 appears to be the sweet spot:
it results in the fewest allocations while minimizing memory footprint.
```
$ for i in $(ls heaptrack.opt.*.gz); do echo $i; heaptrack_print $i | tail -n 6; echo ""; done
heaptrack.opt.0-orig.gz
total runtime: 14.32s.
calls to allocation functions: 8222442 (574192/s)
temporary memory allocations: 2419000 (168924/s)
peak heap memory consumption: 190.98MB
peak RSS (including heaptrack overhead): 239.65MB
total memory leaked: 67.58KB

heaptrack.opt.1-n1.gz
total runtime: 13.72s.
calls to allocation functions: 7184188 (523705/s)
temporary memory allocations: 2419017 (176338/s)
peak heap memory consumption: 191.38MB
peak RSS (including heaptrack overhead): 239.64MB
total memory leaked: 67.58KB

heaptrack.opt.2-n2.gz
total runtime: 12.24s.
calls to allocation functions: 6146827 (502355/s)
temporary memory allocations: 2418997 (197695/s)
peak heap memory consumption: 163.31MB
peak RSS (including heaptrack overhead): 211.01MB
total memory leaked: 67.58KB

heaptrack.opt.3-n4.gz
total runtime: 12.28s.
calls to allocation functions: 6068532 (494260/s)
temporary memory allocations: 2418985 (197017/s)
peak heap memory consumption: 155.43MB
peak RSS (including heaptrack overhead): 201.77MB
total memory leaked: 67.58KB

heaptrack.opt.4-n8.gz
total runtime: 12.06s.
calls to allocation functions: 6068042 (503321/s)
temporary memory allocations: 2418992 (200646/s)
peak heap memory consumption: 166.03MB
peak RSS (including heaptrack overhead): 213.55MB
total memory leaked: 67.58KB

heaptrack.opt.5-n16.gz
total runtime: 12.14s.
calls to allocation functions: 6067993 (499958/s)
temporary memory allocations: 2418999 (199307/s)
peak heap memory consumption: 187.24MB
peak RSS (including heaptrack overhead): 233.69MB
total memory leaked: 67.58KB
```

While that test may be a worst-case edge scenario,
https://llvm-compile-time-tracker.com/compare.php?from=dee85d47d9f15fc268f7b18f279dac2774836615&to=98a57e31b1947d5bcdf4a5605ac2ab32b4bd5f63&stat=instructions
agrees that this also results in improvements in the usual situations.
2021-05-31 15:34:03 +03:00
Alexey Lapshin
7ec130df24 [llvm-objcopy][NFC] Refactor CopyConfig structure - remove lazy options processing.
During the review of D102277 it was decided to remove lazy options processing
from the llvm-objcopy CopyConfig structure. This patch transforms the processing
of ELF lazy options into in-place processing.

Differential Revision: https://reviews.llvm.org/D103260
2021-05-31 14:40:27 +03:00
Sanjay Patel
dceff7ed2b [SDAG] try harder to fold casts into vector compare
sext (vsetcc X, Y) --> vsetcc (zext X), (zext Y) --
(when the zexts are free and a bunch of other conditions)

We have a couple of similar folds to this already for vector selects,
but this pattern slips through because it is only a setcc.

The tests are based on the motivating case from:
https://llvm.org/PR50055
...but we need extra logic to get that example, so I've left that as
a TODO for now.
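
As a rough IR-level sketch of the pattern (hypothetical types; the actual fold
operates on SelectionDAG nodes and only fires when the zexts are free and the
compare survives widening, e.g. an equality or unsigned compare):

```
; before: compare at the narrow type, then sign-extend the mask
define <8 x i16> @before(<8 x i8> %x, <8 x i8> %y) {
  %c = icmp eq <8 x i8> %x, %y
  %r = sext <8 x i1> %c to <8 x i16>
  ret <8 x i16> %r
}

; conceptually after the fold: zero-extend the operands and compare at the
; wide type, so the wide setcc mask is produced directly
define <8 x i16> @after(<8 x i8> %x, <8 x i8> %y) {
  %xz = zext <8 x i8> %x to <8 x i16>
  %yz = zext <8 x i8> %y to <8 x i16>
  %c = icmp eq <8 x i16> %xz, %yz
  %r = sext <8 x i1> %c to <8 x i16>
  ret <8 x i16> %r
}
```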

Differential Revision: https://reviews.llvm.org/D103280
2021-05-31 07:14:01 -04:00
Djordje Todorovic
c362f0a17e [LiveDebugVariables] Stop trimming locations of non-inlined vars
D35953, D62650 and D73691 introduced trimming of variable locations
in the LiveDebugVariables pass, since there are some cases where, after
virtregrewrite, the number of DBG_VALUEs created for some inlined
variables explodes. As it turns out, all problematic cases involved
inlined variables, so it seems reasonable to stop trimming the location
ranges of non-inlined variables.
This has a very good impact on the llvm-locstats report.

Differential Revision: https://reviews.llvm.org/D102917
2021-05-31 02:59:19 -07:00
Fraser Cormack
0cd9460e0b [RISCV] Scale scalably-typed split argument offsets by VSCALE
This patch fixes a bug in lowering scalable-vector types in RISC-V's
main calling convention. When scalable-vector types are split and passed
indirectly, the target is responsible for scaling the offset --
initially set to the known-minimum store size -- by the scalable factor.

Before this we were issuing overlapping loads or stores to the different
parts, leading to incorrect codegen.

Credit to @HsiangKai for spotting this.

Reviewed By: HsiangKai

Differential Revision: https://reviews.llvm.org/D103262
2021-05-31 10:43:13 +01:00
Juneyoung Lee
d233f607af [InstCombine] Fix a few remaining vec transforms to use poison instead of undef
This is a patch that replaces shufflevector and insertelement's placeholder value with poison.

The underlying motivation is to fix the semantics of shufflevector with an undef mask to return poison instead
(D93818).
Consensus was reached in late 2020 via the mailing list as well as the thread in https://bugs.llvm.org/show_bug.cgi?id=44185.

This patch is a simple syntactic change to the existing code, hence it was pushed directly as a commit.
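
For illustration, a typical splat idiom whose placeholder operands are now
poison rather than undef (a hypothetical example showing only the IR shape the
changed transforms emit):

```
define <4 x i32> @splat(i32 %x) {
  ; the placeholder base vector and the second shuffle operand are never read,
  ; so they can be poison instead of undef
  %v = insertelement <4 x i32> poison, i32 %x, i64 0
  %s = shufflevector <4 x i32> %v, <4 x i32> poison, <4 x i32> zeroinitializer
  ret <4 x i32> %s
}
```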
2021-05-31 18:47:09 +09:00
David Green
3f23eb7a31 [DSE] Remove stores in the same loop iteration
DSE will currently only remove stores in the same block unless they can
be guaranteed to be loop invariant. This expands that to any stores that
are in the same Loop, at the same loop level. This should still account
for where AA/MSSA will not handle aliasing between loops, but allow the
dead stores to be removed where they overlap in the same loop iteration.
It requires adding loop info to DSE, but that looks fairly harmless.

The test case this helps is from code like this, which can come up in
certain matrix operations:
  for(i=..)
    dst[i] = 0;
    for(j=..)
      dst[i] += src[i*n+j];

After LICM, this becomes:
  for(i=..)
    dst[i] = 0;
    sum = 0;
    for(j=..)
      sum += src[i*n+j];
    dst[i] = sum;

The first store is dead, and with this patch is now removed.

Differential Revision: https://reviews.llvm.org/D100464
2021-05-31 10:22:37 +01:00
Fraser Cormack
07b2a29ec8 [RISCV] Support vector conversions between fp and i1
This patch custom lowers FP_TO_[US]INT and [US]INT_TO_FP conversions
between floating-point and boolean vectors. As the default action is
scalarization, this patch both supports scalable-vector conversions and
improves the code generation for fixed-length vectors.

The lowering for these conversions can piggy-back on the existing
lowering, which lowers the operations to a supported narrowing/widening
conversion and then either an extension or truncation.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D103312
2021-05-31 09:55:39 +01:00
Andy Wingo
34f735fb88 Revert "[WebAssembly][CodeGen] IR support for WebAssembly local variables"
This reverts commit bf35f4af51cddd743435bb6b94a45592c967891a.  There was
an error in a shared-library build.
2021-05-31 10:55:15 +02:00
Andy Wingo
6faf61e8ac [WebAssembly][CodeGen] IR support for WebAssembly local variables
This patch adds TargetStackID::WasmLocal.  This stack holds locations of
values that are only addressable by name -- not via a pointer to memory.
For the WebAssembly target, these objects are lowered to WebAssembly
local variables, which are managed by the WebAssembly run-time and are
not addressable by linear memory.

For the WebAssembly target, IR indicates that an AllocaInst should be put
on TargetStackID::WasmLocal by placing it in the non-integral address
space WASM_ADDRESS_SPACE_WASM_VAR, which has value 1.  SROA will mostly lift
these allocations to SSA locals, but any alloca that reaches instruction
selection (usually in non-optimized builds) will be assigned the new
TargetStackID there.  Loads and stores to those values are transformed
to new WebAssemblyISD::LOCAL_GET / WebAssemblyISD::LOCAL_SET nodes,
which then lower to the type-specific LOCAL_GET_I32 etc instructions via
tablegen patterns.
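
A hypothetical IR-level illustration of the shape this is keyed off of (names,
types and values are made up; only the addrspace(1) alloca matters):

```
define void @use_wasm_local() {
  ; an alloca in address space 1 (WASM_ADDRESS_SPACE_WASM_VAR) requests a
  ; WebAssembly local rather than a slot in linear memory
  %p = alloca i32, addrspace(1)
  store i32 42, i32 addrspace(1)* %p
  %v = load i32, i32 addrspace(1)* %p
  ret void
}
```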

Differential Revision: https://reviews.llvm.org/D101140
2021-05-31 10:40:38 +02:00
cynecx
be221f0b41 [LangRef] update according to unwinding support in inline asm
https://reviews.llvm.org/D95745 introduced a new `unwind` keyword for inline assembler expressions. Inline asm marked with the `unwind` keyword allows stack unwinding from inline assembly, because the compiler emits unwinding information ("around" the inline asm) as it would for calls/invokes. Unwinding the stack from within non-unwind inline asm may cause UB.
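
A minimal sketch of the new syntax (the asm string is a placeholder; what
matters is the `unwind` keyword, which makes the compiler emit unwind info
around the asm):

```
define void @call_asm_that_may_unwind() {
  ; without 'unwind', unwinding out of this asm would be undefined behavior
  call void asm unwind "# may call something that throws", ""()
  ret void
}
```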

Reviewed By: Amanieu

Differential Revision: https://reviews.llvm.org/D102642
2021-05-31 09:01:46 +01:00
Hyeongyu Kim
f98f660050 [InstCombine] Fix miscompile on GEP+load to icmp fold (PR45210)
As noted in PR45210: https://bugs.llvm.org/show_bug.cgi?id=45210
...the bug is triggered, as Eli says, when sext(idx) * ElementSize overflows.

```
   // assume that GV is an array of 4-byte elements
   GEP = gep GV, 0, Idx // this is accessing Idx * 4
   L = load GEP
   ICI = icmp eq L, value
 =>
   ICI = icmp eq Idx, NewIdx
```

The foldCmpLoadFromIndexedGlobal function simplifies a GEP+load operation to an icmp.
There is a problem, however, because Idx * ElementSize can overflow.

Let's assume that the wanted value is at offset 0.
Then, there are actually four possible values for Idx to match offset 0: 0x00..00, 0x40..00, 0x80..00, 0xC0..00.
We should return true for all these values, but currently, the new icmp only returns true for 0x00..00.

This problem can be solved by masking off the top bits of Idx (as many bits as ElementSize has trailing zeros).

```
   ...
 =>
   Idx' = and Idx, 0x3F..FF
   ICI = icmp eq Idx', NewIdx
```

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D99481
2021-05-31 14:08:20 +09:00
Ben Shi
48eae93fc4 [AVR][NFC] Refactor 8-bit & 16-bit shifts
Reviewed By: dylanmckay

Differential Revision: https://reviews.llvm.org/D98335
2021-05-31 10:30:46 +08:00
David Green
fd282c40e4 [ARM] Guard against loop variant gather ptr operands
This ensures that the operands of any gather/scatter instructions that
we attempt to push out of the loop are invariant, preventing invalid IR
from being generated.
2021-05-30 18:02:14 +01:00
Ben Shi
f7319a117e [AVR] Improve inline assembly
Reviewed By: dylanmckay

Differential Revision: https://reviews.llvm.org/D96394
2021-05-30 23:44:43 +08:00
Florian Hahn
14d190e2ce [LoopDeletion] Add more tests with infinite sub-loops & mustprogress.
A couple of additional tests inspired by PR50511.
2021-05-30 16:41:57 +01:00
Florian Hahn
24266d4661 [VectorCombine] Add tests with noundef index for load scalarization. 2021-05-30 12:15:41 +01:00
Sanjay Patel
c5aaaaa9b9 [InstCombine] fix miscompile from vector select substitution
This is similar to the fix in c590a9880d7a ( PR49832 ), but
we missed handling the pattern for select of bools (no compare
inst).

We can't substitute a vector value because the equality condition
replacement that we are attempting requires that the condition
is true/false for the entire value. Vector select can be partly
true/false.

I added an assert for vector types, so we shouldn't hit this again.
Fixed formatting while auditing the callers.

https://llvm.org/PR50500
2021-05-30 07:11:58 -04:00
Florian Hahn
0fab9e3072 [DAGCombine] Poison-prove scalarizeExtractedVectorLoad.
extractelement is poison if the index is out-of-bounds, so just
scalarizing the load may introduce an out-of-bounds load, which is UB.

To avoid introducing new UB, we can mask the index so it only contains
valid indices.
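
Conceptually, at the IR level (the combine itself operates on SelectionDAG;
the types and names here are hypothetical):

```
; before: the extract is poison for %i >= 4, but a scalarized load at
; offset %i would be UB
define i32 @extract(<4 x i32>* %p, i32 %i) {
  %v = load <4 x i32>, <4 x i32>* %p
  %e = extractelement <4 x i32> %v, i32 %i
  ret i32 %e
}

; after: mask the index so the scalar load stays within the vector
define i32 @extract_scalarized(<4 x i32>* %p, i32 %i) {
  %i.masked = and i32 %i, 3
  %q = getelementptr inbounds <4 x i32>, <4 x i32>* %p, i32 0, i32 %i.masked
  %e = load i32, i32* %q
  ret i32 %e
}
```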

Fixes PR50382.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D103077
2021-05-30 11:40:55 +01:00
Mindong Chen
fee80286d7 [NFCI] Move DEBUG_TYPE definition below #includes
When you try to define a new DEBUG_TYPE in a header file, a DEBUG_TYPE
definition placed above the #includes in files that include it could
result in redefinition warnings or even compile errors.

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D102594
2021-05-30 17:31:01 +08:00
Pengxuan Zheng
e434a8e756 [SafeStack] Use proper API to get stack guard
Using the proper API automatically sets `__stack_chk_guard` to `dso_local` if
`Reloc::Static`. This wasn't strictly necessary until recently when dso_local was
no longer implied by `TargetMachine::shouldAssumeDSOLocal` for
`__stack_chk_guard`. By using the proper API, we can avoid generating unnecessary
GOT relocations.

Reviewed By: vitalybuka

Differential Revision: https://reviews.llvm.org/D102646
2021-05-30 00:52:48 -07:00
Arthur Eubanks
f9c1930dea Revert "[TargetLowering] Only inspect attributes in the arguments for ArgListEntry"
This reverts commit 1c7f32334d4becc725b9025fd32291a0e5729acd.

Some code still needs to properly set parameter ABI attributes, see
D101806.
2021-05-29 23:08:15 -07:00
Arthur Eubanks
85767d0682 Revert "[NFC] Use ArgListEntry indirect types more in ISel lowering"
This reverts commit bc7d15c61da78864b35e3c114294d6e4db645611.

Dependent change is to be reverted.
2021-05-29 22:40:33 -07:00
Fangrui Song
63eb5ce7f4 [InstrProfiling][test] Improve tests 2021-05-29 14:30:44 -07:00
David Green
bb36d4c212 [ARM] Guard against WhileLoopStart kill flags
If the operand of the WhileLoopStart is flagged as killed, that
currently gets propagated to both the t2CMPri created as the instruction is
reverted and the newly created t2DoLoopStart. Only the second should
remain as killing the operand, with the first dropping the flag.
2021-05-29 21:04:26 +01:00
Jessica Clarke
246602f2b6 Revert "[RISCV] Remove -riscv-no-aliases in favour of new -M no-aliases"
The replacement doesn't work for llc, but it is needed by
patchable-function-entry.ll.

This reverts commit aa9a30b83a06e3e5e68e32ea645ec2d9edc27efc.
2021-05-29 15:11:37 +01:00
Jessica Clarke
d9e3f6fdb8 [Support] Fix getMainExecutable on FreeBSD when called via an absolute path
On FreeBSD, absolute paths are passed unmodified in AT_EXECPATH, but
relative paths are resolved to absolute paths, and any symlinks will be
followed in the process. This means that the resource dir calculation
will be wrong if Clang is invoked as an absolute path to a symlink, and
this currently causes clang/test/Driver/rocm-detect.hip to fail on
FreeBSD. Thus, make sure to call realpath on the result, just like is
done on macOS.

Whilst here, clean up the old fallback auxargs loop to use the actual
type for auxargs rather than using lots of hacky casts that rely on
addresses and pointers being the same (which is not the case on CHERI,
and thus Arm's prototype Morello, although for little-endian systems it
happens to work still as the word-sized integer will be padded to a full
pointer, and it's somewhat academic given dereferencing past the end of
environ will give a bounds fault, but CheriBSD is new enough that the
elf_aux_info path will be used). This also makes the code easier to
follow, and removes the confusing double-increment of p.

Reviewed By: dim, arichardson

Differential Revision: https://reviews.llvm.org/D103346
2021-05-29 14:59:46 +01:00
Jessica Clarke
f8419889e5 [RISCV] Remove -riscv-no-aliases in favour of new -M no-aliases
Whilst here, also remove a couple of unnecessary -o - instances.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D103201
2021-05-29 14:58:28 +01:00
Sanjay Patel
7de8b3d369 [InstCombine] fold zext of masked bit set/clear
This does not solve PR17101, but it is one of the
underlying diffs noted here:
https://bugs.llvm.org/show_bug.cgi?id=17101#c8

We could ease the one-use checks for the 'clear'
(no 'not' op) half of the transform, but I do not
know if that asymmetry would make things better
or worse.

Proofs:
https://rise4fun.com/Alive/uVB

Name: masked bit set
%sh1 = shl i32 1, %y
%and = and i32 %sh1, %x
%cmp = icmp ne i32 %and, 0
%r = zext i1 %cmp to i32
=>
%s = lshr i32 %x, %y
%r = and i32 %s, 1

Name: masked bit clear
%sh1 = shl i32 1, %y
%and = and i32 %sh1, %x
%cmp = icmp eq i32 %and, 0
%r = zext i1 %cmp to i32
=>
%xn = xor i32 %x, -1
%s = lshr i32 %xn, %y
%r = and i32 %s, 1

Note: this is a re-post of a patch that I committed at:
rGa041c4ec6f7a

The commit was reverted because it exposed another bug:
rGb212eb7159b40

But that has since been corrected with:
rG8a156d1c2795189 ( D101191 )

Differential Revision: https://reviews.llvm.org/D72396
2021-05-29 08:52:26 -04:00
Sanjay Patel
d6808ae8f5 [InstCombine] reduce code duplication; NFC 2021-05-29 08:33:25 -04:00
Ulrich Weigand
bf93bb1269 [SystemZ] Set getExtendForAtomicOps to ISD::ANY_EXTEND
The implementation of subword atomics does not actually
guarantee the result is zero-extended, which now caused
build bot failures after https://reviews.llvm.org/D101342
was landed.
2021-05-29 12:15:18 +02:00
LLVM GN Syncbot
3dc711e872 [gn build] Port b13edf6e907b 2021-05-29 07:51:43 +00:00
Nikita Popov
f403b1e9b2 [LoopUnroll] Make DomTree explicitly required (NFC)
Some of the code was already assuming that DT is non-null, so
make that requirement more explicit and remove unnecessary null
checks.
2021-05-29 09:37:32 +02:00
LemonBoy
f2c842c94a [AtomicExpandPass][AArch64] Promote xchg with floating-point types to integer ones
Follow the same strategy used for atomic loads/stores by converting the operands to equally-sized integer types.
This change prevents the atomic expansion pass from generating illegal LL/SC pairs when targeting AArch64: `expand-atomicrmw-xchg-fp.ll` would previously instantiate intrinsics such as `llvm.aarch64.ldaxr.p0f32` that cannot be lowered.
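
Sketch of the promotion at the IR level (a hypothetical function; the pass
rewrites the atomicrmw in place):

```
; before: xchg on a float, for which AArch64 LL/SC intrinsics cannot be lowered
define float @swap(float* %p, float %v) {
  %old = atomicrmw xchg float* %p, float %v seq_cst
  ret float %old
}

; after promotion: perform the exchange on an equally-sized integer
define float @swap_promoted(float* %p, float %v) {
  %pi = bitcast float* %p to i32*
  %vi = bitcast float %v to i32
  %oldi = atomicrmw xchg i32* %pi, i32 %vi seq_cst
  %old = bitcast i32 %oldi to float
  ret float %old
}
```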

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D103232
2021-05-29 08:57:27 +02:00
Fangrui Song
f8cdbd49ed [InstrProfiling][test] Fix stale linkage.ll 2021-05-28 21:33:33 -07:00
Fangrui Song
4e9598d950 [InstrProfiling][test] Fix stale tests
* Change linkage/visibility of __profn_ variables to match the reality
* alwaysinline.ll: Add "EnableValueProfiling", otherwise it doesn't test available_externally alwaysinline.
* Delete PR23499.ll - covered by other comdat tests.
2021-05-28 21:14:03 -07:00
Luke
7105889760 [RISCV] Enable interleaved vectorization for RVV
Enable interleaved vectorization for RVV.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D101469
2021-05-29 11:03:27 +08:00
Fangrui Song
df99c4fbee [Internalize] Simplify comdat renaming with noduplicates after D103043
I realized that we can use `comdat noduplicates` which is available on ELF.
Add a special case for wasm which doesn't support the feature.
2021-05-28 16:58:38 -07:00
Amara Emerson
32418dbc82 [AArch64][GlobalISel] Fix a crash during selection of a G_ZEXT(s8 = G_LOAD)
We have special handling for a zext of a load <32b because the load does a zext
for free. In that case, we just select the G_ZEXT as if it were a copy but this
triggered the copy checking code to balk at the mismatched size.

This was being hidden because normally these get combined into G_ZEXTLOAD but
for atomics this doesn't happen. The test case here just uses a normal load
because the particular atomic isn't supported yet anyway.
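
The shape in question looks something like this at the IR level (a hypothetical
example; with an atomic load the G_ZEXTLOAD combine does not fire, leaving a
separate G_ZEXT of the G_LOAD):

```
define i32 @zext_small_load(i8* %p) {
  ; an 8-bit load followed by a zext; the load itself zero-extends for free
  %v = load atomic i8, i8* %p monotonic, align 1
  %z = zext i8 %v to i32
  ret i32 %z
}
```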
2021-05-28 16:35:24 -07:00
Nikita Popov
3398903f57 [LoopUnroll] Use changeToUnreachable() (NFC)
When fully unrolling with a non-latch exit, the latch block is
folded to unreachable. Replace this folding with the existing
changeToUnreachable() helper, rather than performing it manually.

This also moves the fold to happen after the manual DT update
for exit blocks. I believe this is correct in that the conversion
of an unconditional backedge into unreachable should not affect
the DT at all.

Differential Revision: https://reviews.llvm.org/D103340
2021-05-29 00:11:21 +02:00
Craig Topper
3771a3d0c5 [RISCV] Add separate MxList tablegen classes for widening/narrowing and sext.zext.vf2/4/8. NFC
This is cleaner than slicing the MxList to remove elements from
the beginning or end since that requires hardcoding the size.

I don't expect the size of the list to change, but we shouldn't
repeat it in multiple places.
2021-05-28 14:06:19 -07:00
Nikita Popov
bd151f910c [LoopUnroll] Add store to unreachable latch test (NFC)
This is to show that we currently only convert the terminator to
unreachable, but don't clean up instructions before it (unless
trivial DCE removes them).

Also clean up excessive whitespace in this test.
2021-05-28 22:49:23 +02:00
Nikita Popov
7e30b2046c [LoopUnroll] Clean up exit folding (NFC)
This does some non-functional cleanup of exit folding during
unrolling. The two main changes are:

 * First rewrite latch->header edges, which is unrelated to exit
   folding.
 * Combine folding for latch and non-latch exits. After the
   previous change, the only difference in their logic is that
   for non-latch exits we currently only fold "known non-exit"
   cases, but not "known exit" cases.

I think this helps a lot to clarify this code and prepare it for
future changes.

Differential Revision: https://reviews.llvm.org/D103333
2021-05-28 22:31:13 +02:00