commit ca7b7a7578
Summary:
This is motivated by the D67122 sanitizer check enhancement. That patch seemingly
worsens `-fsanitize=pointer-overflow` overhead from 25% to 50%, which strongly
implies missing folds.

In this particular case, given
```
char* test(char& base, unsigned long offset) {
  return &base + offset;
}
```
it will end up producing something like
https://godbolt.org/z/LK5-iH
which after optimizations reduces down to roughly
```
define i1 @t0(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  %non_null_after_adjustment = icmp ne i64 %adjusted, 0
  %no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
  %res = and i1 %non_null_after_adjustment, %no_overflow_during_adjustment
  ret i1 %res
}
```
Without D67122 there was no `%non_null_after_adjustment`, and in this
particular case we can get rid of the overhead:

Here we add some offset to a non-null pointer, and check that the result
does not overflow and is not a null pointer. But since the base pointer is
already non-null, and we check for overflow, that overflow check will
already catch the null pointer, so the separate null check is redundant
and can be dropped.

Alive proofs:
https://rise4fun.com/Alive/WRzq

There are more patterns of "unsigned-add-with-overflow"; they are not handled
here, but this is the main pattern, the one we currently consider canonical,
so it makes sense to handle it.

https://bugs.llvm.org/show_bug.cgi?id=43246

Reviewers: spatel, nikic, vsk

Reviewed By: spatel

Subscribers: hiraditya, llvm-commits, reames

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D67332

llvm-svn: 371349
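For illustration, here is a hand-written sketch of what the `@t0` example above
could fold to once the redundant null check is dropped; it mirrors the IR shown
in the summary and is not the literal output of any particular pass:

```llvm
define i1 @t0(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  ; The overflow check alone is sufficient: %base is non-null, so
  ; %base_int >= 1. If %adjusted were 0, it would be unsigned-less-than
  ; %base_int and the uge check below would already fail, which is why
  ; the separate 'icmp ne i64 %adjusted, 0' can be dropped.
  %no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
  ret i1 %no_overflow_during_adjustment
}
```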
Analysis Opportunities:

//===---------------------------------------------------------------------===//

In test/Transforms/LoopStrengthReduce/quadradic-exit-value.ll, the
ScalarEvolution expression for %r is this:

  {1,+,3,+,2}<loop>

Outside the loop, this could be evaluated simply as (%n * %n); however,
ScalarEvolution currently evaluates it as

  (-2 + (2 * (trunc i65 (((zext i64 (-2 + %n) to i65) * (zext i64 (-1 + %n) to i65)) /u 2) to i64)) + (3 * %n))

In addition to being much more complicated, it involves i65 arithmetic,
which is very inefficient when expanded into code. (A derivation of the
simpler form is sketched after these notes.)

//===---------------------------------------------------------------------===//

In formatValue in test/CodeGen/X86/lsr-delayed-fold.ll, ScalarEvolution is
forming this expression:

  ((trunc i64 (-1 * %arg5) to i32) + (trunc i64 %arg5 to i32) + (-1 * (trunc i64 undef to i32)))

This could be folded to

  (-1 * (trunc i64 undef to i32))

//===---------------------------------------------------------------------===//
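As a sketch of why (%n * %n) is the expected result for the first note above,
assuming (as the expanded expression implies) that the chrec is evaluated at a
backedge-taken count of %n - 1:

```latex
% Evaluating the chrec {1,+,3,+,2}<loop> after k backedge iterations,
% using the standard binomial expansion of add-recurrences.
\begin{align*}
  f(k)   &= 1 + 3\binom{k}{1} + 2\binom{k}{2} = 1 + 3k + k(k-1) = (k+1)^2 \\
  f(n-1) &= \bigl((n-1)+1\bigr)^2 = n^2 = \%n \cdot \%n
\end{align*}
```

The complicated form agrees with this: -2 + (%n - 2)(%n - 1) + 3*%n also
simplifies algebraically to %n * %n.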