commit f027f2e482
Author: Chandler Carruth

Tweak the core loop in StringRef::find to avoid calling memcmp on every
iteration.

Instead, load the byte at the needle length, compare it directly, and
save it to use as the index into the lookup table of lengths we can
skip forward.
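
For reference, the shape the loop ends up with is roughly the
following. This is an illustrative sketch, not the actual source: the
name find_sketch is made up, and it assumes 2 <= N <= 255 so that the
length-1 case is handled separately (see below) and skip distances fit
in a byte.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Sketch of a Horspool-style loop: load the last byte of the
    // candidate window once, compare it directly, and reuse the same
    // byte to index the table of skip distances.
    size_t find_sketch(const char *Data, size_t Size,
                       const char *Needle, size_t N) {
      if (N > Size)
        return (size_t)-1; // npos

      // For each byte value, how far the window may shift when that
      // byte ends the current window without completing a match.
      uint8_t BadCharSkip[256];
      std::memset(BadCharSkip, (int)N, 256);
      for (size_t i = 0; i != N - 1; ++i)
        BadCharSkip[(uint8_t)Needle[i]] = (uint8_t)(N - 1 - i);

      const char *Start = Data;
      const char *Stop = Data + (Size - N + 1);
      do {
        uint8_t Last = (uint8_t)Start[N - 1];
        if (Last == (uint8_t)Needle[N - 1])
          if (std::memcmp(Start, Needle, N - 1) == 0)
            return (size_t)(Start - Data);
        Start += BadCharSkip[Last];
      } while (Start < Stop);
      return (size_t)-1; // npos
    }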

I also added an annotation to expect that the comparison fails so that
the loop gets laid out contiguously without the call to memcmp (and the
substantial register shuffling that the ABI requires of that call).

Finally, because this behaves especially badly with a needle length of
one (by calling memcmp with a zero length), special-case that to call
memchr directly, which is what we should have been doing anyways.
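
A minimal sketch of that special case (the name is illustrative): a
one-byte needle is just a byte scan, so handing it to memchr avoids the
zero-length memcmp and gets libc's typically well-tuned scan for free.

    #include <cstddef>
    #include <cstring>

    // One-byte needles degenerate to a byte scan; dispatch to memchr
    // rather than running the skip-table loop at all.
    size_t find_one_byte(const char *Data, size_t Size, char C) {
      const void *P = std::memchr(Data, C, Size);
      return P ? (size_t)((const char *)P - Data) : (size_t)-1; // npos
    }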

This was motivated by the fact that there are a large number of test
cases in 'check-llvm' where FileCheck's performance is dominated by
calls to StringRef::find (in a release, no-asserts build). I'm working
on patches to generally improve matters there, but this alone was worth
a 12.5% improvement in one test case where FileCheck spent 92% of its
time in this routine.

I experimented a bunch with different minor variations on this theme,
for example setting the pointer *at* the last byte and indexing
backwards for the call to memcmp. That didn't improve anything on this
version and seemed more complex. I also tried other things to make the
loop flow more nicely and none worked. =/ It is a bit unfortunate that
the generated code here remains pretty gross, but I don't see any obvious
ways to improve it. At this point, most of my ideas would be really
elaborate:

1) While the remainder of the string is long enough, we could load
   a 16-byte or 32-byte vector at the address of the last byte and use
   palignr to rotate that and check the first 15 or 31 bytes at the
   front of the next segment, essentially pre-loading the first several
   bytes of the next iteration so we could quickly detect a mismatch in
   those bytes without an additional memory access. The downsides would
   be the code complexity, the need for a fallback loop, and a likely
   misaligned vector load. It would also make the common case of the
   last byte not matching somewhat slower (we would need some
   extraction from a vector). A simplified sketch of the vector probe
   appears after this list.
2) While we have space, we could do an aligned load of a 16- or 32-byte
   vector that *contains* the end byte, and use any preceding bytes to
   have a more precise "no" test, and any subsequent bytes could be
   saved for the next iteration. This removes any unaligned-load
   penalty, but still requires us to pay the overhead of vector
   extraction for the cases where we didn't need to do anything other
   than load and compare the last byte.
3) Try to walk from the last byte in a way that is friendlier to the
   cache and/or the memory pre-fetcher, considering we have to poke the
   last byte anyways.
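
To make idea (1) a bit more concrete, here is a simplified sketch of
just the vector probe, assuming SSE2 and leaving out the palignr
rotation and the fallback loop it would require. It tests 16 candidate
end-byte positions in one comparison; a zero mask rejects the whole
chunk without any further loads.

    #include <immintrin.h> // SSE2
    #include <cstdint>

    // Load 16 haystack bytes at the current end-byte position (likely
    // an unaligned load; the caller must guarantee 16 readable bytes)
    // and compare them all against the needle's final byte. Each set
    // bit in the result marks a candidate window whose last byte
    // matches and needs a full check.
    static inline uint32_t probeLastByte16(const char *EndPos,
                                           char NeedleLast) {
      __m128i Chunk = _mm_loadu_si128((const __m128i *)EndPos);
      __m128i Splat = _mm_set1_epi8(NeedleLast);
      return (uint32_t)_mm_movemask_epi8(_mm_cmpeq_epi8(Chunk, Splat));
    }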

No idea if any of these are really worth pursuing though. They all seem
somewhat unlikely to yield big wins in practice and to be a lot of work
and complexity. So I settled here, which at least seems like a strict
improvement over the previous version.

llvm-svn: 289373
2016-12-11 07:46:21 +00:00
Analysis [InstSimplify] improve function name; NFC 2016-12-10 17:40:47 +00:00
AsmParser [AsmParser] Avoid recursing when lexing ';'. NFC. 2016-11-16 22:25:05 +00:00
Bitcode Fix MSVC bool to uint64_t promotion warning 2016-12-06 11:12:53 +00:00
CodeGen [SelectionDAG] Add ability for computeKnownBits to peek through bitcasts from 'large element' scalar/vector to 'small element' vector. 2016-12-10 17:00:00 +00:00
DebugInfo Make a DWARF generator so we can unit test DWARF APIs with gtest. 2016-12-08 01:03:48 +00:00
Demangle Demangle: remove references to allocator for default allocator 2016-11-20 00:20:27 +00:00
ExecutionEngine [mips][rtdyld] Merge code to write relocated values to the section. NFC 2016-12-07 11:41:23 +00:00
Fuzzer [libFuzzer] don't depend on time in a test 2016-12-11 06:28:09 +00:00
IR [X86] Remove masking from 512-bit VPERMIL intrinsics in preparation for being able to constant fold them in InstCombineCalls like we do for 128/256-bit. 2016-12-11 01:26:44 +00:00
IRReader Timer: Track name and description. 2016-11-18 19:43:18 +00:00
LibDriver
LineEditor
Linker IR: Move NumElements field from {Array,Vector}Type to SequentialType. 2016-12-02 03:20:58 +00:00
LTO LTO: Hash the parts of the LTO configuration that affect code generation. 2016-12-08 05:28:30 +00:00
MC Add a comment consumer mechanism to MCAsmLexer 2016-12-08 10:31:21 +00:00
Object [Object][MachO] Reference-ify some helper function arguments. NFC. 2016-12-04 01:56:10 +00:00
ObjectYAML [ObjectYAML] Support for DWARF debug_aranges 2016-12-09 00:26:44 +00:00
Option Generalize ArgList::AddAllArgs more 2016-09-29 19:47:58 +00:00
Passes [PM] Support invalidation of inner analysis managers from a pass over the outer IR unit. 2016-12-10 06:34:44 +00:00
ProfileData Make the Error class constructor protected 2016-11-11 04:28:40 +00:00
Support Tweak the core loop in StringRef::find to avoid calling memcmp on every 2016-12-11 07:46:21 +00:00
TableGen [TableGen] Centralize/Unify error handling. 2016-12-05 22:58:01 +00:00
Target [X86] Fix a comment to say 'an FMA' instead of 'a FMA'. NFC 2016-12-11 01:28:08 +00:00
Transforms [X86][InstCombine] Teach InstCombineCalls to simplify demanded elements for scalar FMA intrinsics. 2016-12-11 07:42:06 +00:00
CMakeLists.txt Try to fix a circular dependency in the modules build. 2016-09-06 20:16:19 +00:00
LLVMBuild.txt Add an c++ itanium demangler to llvm. 2016-09-06 19:16:48 +00:00