Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/548
The fix applies only to Chromium-based browsers -- an
`X-DNS-Prefetch-Control` header[1] will be unconditionally
injected when uBO's "Disable pre-fetching" setting is
enabled (it is by default).
This is a mitigation; it does not completely fix the issue
of the "Disable pre-fetching" setting being disregarded on
Chromium-based browsers when sites use
`preconnect`/`preload`.
[1] https://developer.mozilla.org/docs/Web/HTTP/Headers/X-DNS-Prefetch-Control
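Below is a minimal sketch of how such a header can be
injected from an extension's `webRequest` listener; the
`disablePrefetching` flag is a hypothetical stand-in for
uBO's actual setting, not its real code:

```js
// Hypothetical stand-in for uBO's "Disable pre-fetching" setting.
const disablePrefetching = true;

chrome.webRequest.onHeadersReceived.addListener(
    details => {
        if ( disablePrefetching !== true ) { return; }
        const responseHeaders = details.responseHeaders;
        // Unconditionally append the header to the response.
        responseHeaders.push({
            name: 'X-DNS-Prefetch-Control',
            value: 'off'
        });
        return { responseHeaders };
    },
    { urls: [ 'http://*/*', 'https://*/*' ], types: [ 'main_frame', 'sub_frame' ] },
    [ 'blocking', 'responseHeaders' ]
);
```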
Related commit:
- 69a43e07c4
Using 32 bits of token hash rather than just the 16 lower
bits does help discard more unknown tokens.
Using the default filter lists, the known-token lookup
table is populated with 12,276 entries out of 65,536, which
suggests that in theory a large share of extracted tokens
can be discarded as unknown.
In practice, running the built-in
staticNetFilteringEngine.benchmark() with the default filter
lists, I find that 1,518,929 tokens were skipped out of
4,441,891 extracted tokens, or 34%.
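As an illustration of the mechanism (assumed details, not
uBO's exact code), a 65,536-slot table can record which
token hashes occur in any loaded filter, with all 32 bits of
the hash folded into the 16-bit index rather than only the
low 16 bits being kept:

```js
const knownTokens = new Uint8Array(65536);

// Classic 31*h+c rolling hash, kept to 32 bits.
const tokenHash = token => {
    let hash = 0;
    for ( let i = 0; i < token.length; i++ ) {
        hash = (hash << 5) - hash + token.charCodeAt(i) | 0;
    }
    return hash >>> 0;
};

// Folding the high 16 bits into the index spreads entries better
// than truncating to the low 16 bits, so fewer unknown tokens
// collide with known ones.
const tokenSlot = th => (th & 0xFFFF) ^ (th >>> 16);

const addKnownToken = token => {
    knownTokens[tokenSlot(tokenHash(token))] = 1;
};

const isKnownToken = token =>
    knownTokens[tokenSlot(tokenHash(token))] !== 0;
```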
Related commit:
- 3f3a1543ea
The regression was preventing uBO from finding from which
list a filter originated. This affected only filters for
which the `domain=` option had multiple hostnames.
Given that all tokens extracted from a single URL are
potentially iterated multiple times in a single URL-matching
cycle, it pays to ignore extracted tokens which are known to
not be used anywhere in the static filtering engine.
The gain in processing a single network request in the
static filtering engine can become especially high when
dealing with long and random-looking URLs, which are likely
to consist mostly of tokens known to not be in use.
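Continuing the sketch above, the tokenizer can drop unknown
tokens up front, so the repeated per-request iterations
never see them (a rough stand-in for uBO's actual
tokenizer):

```js
const tokenRe = /[0-9a-z]+/g;

const extractUsefulTokens = url => {
    const tokens = [];
    for ( const match of url.toLowerCase().matchAll(tokenRe) ) {
        // Tokens absent from the known-token table cannot match any
        // filter, so they are discarded before the matching cycle.
        if ( isKnownToken(match[0]) ) {
            tokens.push(match[0]);
        }
    }
    return tokens;
};
```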
Related commit:
- 99390390fc
The token information available at compile time can be stored
in the filter to be used at match() time. This allows the use of
startsWith() rather than a more costly indexOf() call as a first
quick test to detect mismatches.
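A minimal sketch of the idea, with hypothetical names: if a
filter records at compile time how far its token sits from
the start of its pattern, then at match time the expected
position of the whole pattern in the URL is known, and a
cheap startsWith() test replaces a scanning indexOf() call:

```js
class FilterPlain {
    constructor(pattern, tokenBeg) {
        this.pattern = pattern;   // literal filter pattern
        this.tokenBeg = tokenBeg; // offset of the token inside the pattern
    }
    match(url, tokenBegInURL) {
        // The pattern can only match at one position, derived from
        // where the token was found in the URL.
        return url.startsWith(this.pattern, tokenBegInURL - this.tokenBeg);
    }
}
```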
Due to how web pages typically load secondary resources and due
to how HNTrieContainer instances are used in uBO, there is a
great likelihood that the result of a previous call to
HNTrieRef.matches() can be reused in a subsequent call.
This has been confirmed by instrumenting HNTrieRef.matches().
Since uBO uses distinct HNTrieContainer instances to match
against either the request or the origin hostnames, there is
a high likelihood of repeated calls to HNTrieRef.matches()
with the same hostname as argument, hence a performance gain
from caching the argument and its result: although
HNTrie.matches() is fast, comparing two short strings is
faster still when it allows HNTrie.matches() to be skipped
altogether.
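The caching can be as simple as remembering the last
argument and result -- an illustrative memoization sketch,
not the actual HNTrieRef code:

```js
class CachedTrieRef {
    constructor(trieRef) {
        this.trieRef = trieRef;
        this.lastNeedle = '';
        this.lastResult = -1;
    }
    matches(hostname) {
        // A short string comparison short-circuits the trie walk
        // whenever the same hostname is looked up twice in a row.
        if ( hostname !== this.lastNeedle ) {
            this.lastNeedle = hostname;
            this.lastResult = this.trieRef.matches(hostname);
        }
        return this.lastResult;
    }
}
```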
Performance- and memory-related work. Three more classes
have been created to avoid regex-based filters internally.
The purpose is to enforce filters which have a single
wildcard in their pattern, a common occurrence. The filter
pattern is split into two literal string segments.
Similar to the above, with the added condition that the
filter is hostname-anchored (`||`). The "Wildcard2" variant
is a further specialization enforcing filters in which the
only wildcard is immediately preceded by the `^` special
character, again a very common occurrence.
Using two literal string segments in lieu of a regex allows
a mismatch to be detected quickly by testing just the first
segment. Additionally, this reduces the memory footprint, as
regexes are much more expensive memory-wise than plain
strings.
These three new filter classes make it possible to replace
5,276 internally regex-based filters with plain string-based
ones.
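A sketch of the first of these classes, with hypothetical
names and assumed details: the pattern is split at its
single `*` at compile time, and the test on the first
literal segment rejects most candidates before the second
segment is even looked at:

```js
class FilterWildcard1 {
    constructor(pattern, tokenBeg) {
        const i = pattern.indexOf('*');
        this.s0 = pattern.slice(0, i);  // literal segment before the wildcard
        this.s1 = pattern.slice(i + 1); // literal segment after the wildcard
        this.tokenBeg = tokenBeg;       // token offset within the first segment
    }
    match(url, tokenBegInURL) {
        const s0Beg = tokenBegInURL - this.tokenBeg;
        // Cheap first test: the first segment must sit exactly where
        // the token position says it should.
        return url.startsWith(this.s0, s0Beg) &&
               url.indexOf(this.s1, s0Beg + this.s0.length) !== -1;
    }
}
```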
The often-called isHnAnchored() has been further fine-tuned
to avoid as much work as possible. I have also observed that
using an arrow function for closure purposes measurably
helps performance, as per the built-in benchmark.
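As a hedged illustration of the closure point (simplified
logic, not the actual code), the hot helper can be returned
as an arrow function whose state lives in the enclosing
scope instead of being re-resolved on every call:

```js
const isHnAnchored = (( ) => {
    let lastUrl = '';
    let lastHnEnd = 0;
    return (url, matchStart) => {
        // Recompute the end of the hostname part only when the URL
        // changes; repeated calls for the same URL reuse the closure
        // state. The check itself is a simplification of the real one.
        if ( url !== lastUrl ) {
            lastUrl = url;
            const end = url.indexOf('/', url.indexOf('://') + 3);
            lastHnEnd = end !== -1 ? end : url.length;
        }
        return matchStart < lastHnEnd;
    };
})();
```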
The purpose of using a custom base128 encoder is to
convert array buffers into strings, to allow a direct
string-to-array buffer conversion at load time:
string => array buffer
Whereas a JSON array would require an extra step:
JSON array as string => JS array => array buffer
It turns out that the current use of a custom base128
encoding results in significantly larger selfie storage
usage when converting array buffers into strings.
Speculation: possibly the browser internally converts the
strings to be saved into JSON strings. Since the custom
base128 encoder is likely to cause the resulting string to
contain many unprintable ASCII characters, these need to be
escaped when converted to JSON -- and escaped characters
occupy more space than non-escaped ones.
Using a sequence of base-64 numbers means only printable
characters will be present in the output string, hence no
escaping is necessary. I have observed a significant
reduction in storage usage for selfie purposes.
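A minimal sketch of the base-64-digits approach, not uBO's
actual encoder: each 32-bit value is written as six digits
drawn from a 64-character printable alphabet, so no
character of the output ever needs escaping:

```js
const digits =
    '0123456789' +
    'ABCDEFGHIJKLMNOPQRSTUVWXYZ' +
    'abcdefghijklmnopqrstuvwxyz' +
    '@%';

const encode = arr => {
    let s = '';
    for ( const v of arr ) {
        // Six 6-bit digits cover all 32 bits of each value.
        for ( let shift = 30; shift >= 0; shift -= 6 ) {
            s += digits[(v >>> shift) & 63];
        }
    }
    return s;
};

const decode = s => {
    const out = new Uint32Array(s.length / 6);
    for ( let i = 0; i < out.length; i++ ) {
        let v = 0;
        for ( let j = 0; j < 6; j++ ) {
            v = (v << 6) | digits.indexOf(s.charAt(i * 6 + j));
        }
        out[i] = v >>> 0;
    }
    return out;
};

// decode(encode(buf)) round-trips a Uint32Array through a plain string.
```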