The `null` placeholders are not necessary; we can just use
default arguments instead, and add the HNTrieContainer
references if and only if they are instantiated.
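A minimal sketch of the pattern, with a stub class standing in for the
real HNTrieContainer and a hypothetical factory name; this is not the
actual uBO code:

    // Stub standing in for uBO's real HNTrieContainer, illustration only.
    class HNTrieContainer {
        matches(hostname) { return -1; }
    }

    // Before: createHostnameMatcher(null) plus a `!== null` test inside.
    // After: a default argument, so no `null` placeholder is ever passed
    // around and no null check is needed at the call sites.
    const createHostnameMatcher = (trieContainer = new HNTrieContainer()) =>
        hostname => trieContainer.matches(hostname) !== -1;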
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/550
Related Chromium issue (I can't access it):
- https://bugs.chromium.org/p/chromium/issues/detail?id=957866
Findings so far: affects browsers based on Chromium 74.
I could not reproduce the issue with either Chromium 73 or
Google Chrome 75.
This commit is a mitigation to prevent sites from using
uBO's internal WAR secret for tracking purposes. A secret
can be used for at most one second, after which a new secret
is generated.
The original issue related to the implementation of
secret-gated web accessible resources is:
- https://github.com/gorhill/uBlock/issues/2823
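A minimal sketch of the mitigation, using hypothetical helper names
rather than uBO's actual code: the secret embedded in web accessible
resource URLs is regenerated once it is more than one second old, so it
cannot serve as a long-lived tracking identifier.

    // Hypothetical helper, illustration only: rotate the WAR secret
    // after at most one second of use.
    let warSecret = '';
    let warSecretTime = 0;

    const getWarSecret = () => {
        const now = Date.now();
        if ( warSecret === '' || now - warSecretTime > 1000 ) {
            warSecret = Math.random().toString(36).slice(2);
            warSecretTime = now;
        }
        return warSecret;
    };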
The purpose of this new `:nth-ancestor(n)` operator is to
look up the nth ancestor relative to the currently selected
node.
It is essentially equivalent to `:xpath(..)`, where
ancestor distance is expressed as a number rather than a
sequence of slash-separated `..`.
The rationale for introducing this new procedural selector
is to provide a low-overhead way to accomplish ancestor
selection.
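A minimal sketch of what the operator computes (not the actual
implementation): climb n times through parentElement from the currently
selected node, which is cheaper than compiling and evaluating the
equivalent XPath expression.

    // Illustration only: the nth ancestor of a node, or null if the
    // document root is reached first.
    const nthAncestor = (node, n) => {
        let ancestor = node;
        for ( let i = 0; i < n && ancestor !== null; i++ ) {
            ancestor = ancestor.parentElement;
        }
        return ancestor;
    };

    // For example, `example.com##.inner:nth-ancestor(2)` selects the
    // grandparent of each matching `.inner` element, much like
    // `example.com##.inner:xpath(../..)` would.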
This commit implements the alphabetical ordering of HNTrie
nodes, so as to make it possible to bail out early at
HNTrie.matches() time.
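A sketch of the idea, on a simplified node structure rather than
HNTrie's actual flat buffer layout: with siblings kept in alphabetical
order, the scan can stop as soon as the character being examined sorts
after the one being looked up.

    // Illustration only: look up a character among alphabetically
    // ordered sibling nodes, bailing out as soon as a match becomes
    // impossible.
    const findChild = (sortedSiblings, ch) => {
        for ( const sibling of sortedSiblings ) {
            if ( sibling.ch === ch ) { return sibling; }
            if ( sibling.ch > ch ) { return null; }  // early bail-out
        }
        return null;
    };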
Contrary to what I expected, there is no performance gain
observed in HNTrie.matches() as per benchmarks -- I find
the results perplexing.
Because of this, I will revert this commit immediately.
The purpose of this commit is to record the changes so
that I can bring them back to life in the future whenever
I want to investigate further.
Consider the two following filters:
example.com
www.example.com
This commit makes it so that if the first filter is
already present in a given HNTrie, the second filter
will not be stored, since HNTrie will _always_
return the first filter as a match whenever the
hostname to match is example.com or any subdomain
of example.com.
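A minimal illustration of why the second filter is redundant (not the
actual trie code): a hostname entry matches the hostname itself and
every subdomain of it.

    // Illustration only: hostname filters match at label boundaries.
    const hostnameMatches = (filter, hostname) =>
        hostname === filter || hostname.endsWith(`.${filter}`);

    hostnameMatches('example.com', 'www.example.com');      // true
    hostnameMatches('example.com', 'sub.www.example.com');  // true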
The detection of such pointless filters is
virtually free when adding a hostname to an HNTrie
instance (given how data is stored in the trie), so
in practice no overhead is incurred.
The ability to ignore impossible-to-match filters
in HNTrie instances will _especially_ benefit those
using large hosts files.
Examples of how this helps with real-world configurations:
- Default lists:
444 filters out of 100,382 were ignored as a result
of this commit.
- Default lists + "Energized Ultimate Protection":
283,669 filters out of 903,235 were ignored as a
result of this commit.
Side note: There was no measurable difference between
the two configurations above in the performance of
the matching algorithm as reported by the built-in
benchmark tool.
The staticNetFilteringEngine uses token hashes to store/look up
filters in Map objects.
Before this commit, the tokens were encoded into token hashes
as JS numbers (not exceeding MAX_SAFE_INTEGER) using at most
the first 8 characters of the token.
With this commit, token hashes are now restricted to fit
into 32-bit integers, and are derived from at most the first 7
characters. This improves filter look-up performance as per the
built-in benchmark().
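A hedged sketch of such a token hash; the exact hashing steps below are
made up for illustration, only the constraints from this commit (a
32-bit result derived from at most the first 7 characters) are taken
from it.

    // Illustration only: derive a token hash which fits into a 32-bit
    // integer from at most the first 7 characters of a token.
    const tokenHash = token => {
        const n = Math.min(token.length, 7);
        let th = 0;
        for ( let i = 0; i < n; i++ ) {
            th = (th << 4 ^ token.charCodeAt(i)) >>> 0;
        }
        return th;
    };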
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/548
The fix applies only to Chromium-based browsers -- an
`X-DNS-Prefetch-Control` header[1] will be unconditionally
injected when uBO's "Disable pre-fetching" setting is
enabled (it is by default).
This is a mitigation; it does not completely fix the issue
of the "Disable pre-fetching" setting being disregarded on
Chromium-based browsers when sites use
`preconnect`/`preload`.
[1] https://developer.mozilla.org/docs/Web/HTTP/Headers/X-DNS-Prefetch-Control
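A minimal sketch of the injection, assuming the standard Chromium
webRequest API; this is not uBO's actual listener, just the shape of
the mitigation:

    // Illustration only: inject `X-DNS-Prefetch-Control: off` into
    // document responses when "Disable pre-fetching" is enabled.
    chrome.webRequest.onHeadersReceived.addListener(
        details => {
            const headers = details.responseHeaders;
            headers.push({ name: 'X-DNS-Prefetch-Control', value: 'off' });
            return { responseHeaders: headers };
        },
        { urls: [ '<all_urls>' ], types: [ 'main_frame', 'sub_frame' ] },
        [ 'blocking', 'responseHeaders' ]
    );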
Related commit:
- 69a43e07c4
Using 32 bits of token hash rather than just the 16 lower
bits does help discard more unknown tokens.
Using the default filter lists, the known-token lookup
table is populated with 12,276 entries out of 65,536, which
makes the case that in theory there are a lot of possible
tokens which can be discarded.
In practice, running the built-in
staticNetFilteringEngine.benchmark() with default filter
lists, I find that 1,518,929 tokens were skipped out of
4,441,891 extracted tokens, or 34%.
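A hedged sketch of the lookup table described here (the exact byte
stored is made up for illustration): the table is indexed with the 16
low bits of the token hash and stores a small signature derived from
the upper bits, so that more unknown tokens can be rejected than with
the 16 low bits alone.

    // Illustration only: 65,536-entry known-token table.
    const knownTokens = new Uint8Array(65536);

    const addKnownToken = th => {
        knownTokens[th & 0xFFFF] = (th >>> 16) % 254 + 1;
    };

    const isKnownToken = th =>
        knownTokens[th & 0xFFFF] === (th >>> 16) % 254 + 1;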
Related commit:
- 3f3a1543ea
The regression was preventing uBO from finding from which list a filter
originated. This affected only filters for which the `domain=`
option had multiple hostnames.
Given that all tokens extracted from one single URL are potentially
iterated multiple times in a single URL-matching cycle, it pays to
ignore extracted tokens which are known not to be used anywhere in
the static filtering engine.
The gain in processing a single network request in the static
filtering engine can become especially high when dealing with
long and random-looking URLs, which have a high likelihood
of containing mostly tokens which are known not to be in use.
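A sketch of how the pieces fit together, reusing the hypothetical
tokenHash() and isKnownToken() helpers sketched above; the
token-extraction regex is also only an approximation.

    // Illustration only: tokens extracted from the URL which are
    // unknown to the static filtering engine are dropped right away,
    // so they are never revisited during the URL-matching cycle.
    const extractUsefulTokenHashes = url => {
        const out = [];
        for ( const match of url.toLowerCase().matchAll(/[0-9a-z%]+/g) ) {
            const th = tokenHash(match[0]);
            if ( isKnownToken(th) ) { out.push(th); }
        }
        return out;
    };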
Related commit:
- 99390390fc
The token information available at compile time can be stored
in the filter to be used at match() time. This allows the use of
startsWith() rather than a more costly indexOf() call as a first
quick test to detect mismatches.
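A hedged sketch of the idea with a hypothetical filter class, not the
actual code: knowing at compile time where the token sits inside the
pattern, and at match time where the token sits inside the URL, the
pattern can be tested with startsWith() at a computed position instead
of scanning the whole URL with indexOf().

    // Illustration only.
    class PlainPatternFilter {
        constructor(pattern, tokenOffset) {
            this.pattern = pattern;          // e.g. '/ads/banner/'
            this.tokenOffset = tokenOffset;  // token offset in the pattern
        }
        match(url, tokenBeginIndex) {
            // Cheap first test: is the pattern at the expected position?
            return url.startsWith(this.pattern,
                                  tokenBeginIndex - this.tokenOffset);
        }
    }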
Due to how web pages typically load secondary resources and due
to how HNTrieContainer instances are used in uBO, there is a
great likelihood that the result of a previous call to
HNTrieRef.matches() can be reused in a subsequent call.
This has been confirmed by instrumenting HNTrieRef.matches().
Since uBO uses distinct HNTrieContainer instances to match
against either the request or the origin hostname, this
means a high likelihood of repeated calls to HNTrieRef.matches()
with the same hostname as argument, hence a performance gain
when caching the argument+result pair -- even though
HNTrie.matches() is fast, comparing two short strings is even
faster when it allows HNTrie.matches() to be skipped altogether.
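A minimal sketch of the caching, with a hypothetical wrapper rather
than the real HNTrieRef: the last hostname and its result are
remembered, so a repeated call with the same hostname skips the trie
lookup entirely.

    // Illustration only: memoize the last argument+result pair.
    const cachedTrieRef = trie => {
        let lastHostname = '';
        let lastResult = -1;
        return {
            matches(hostname) {
                if ( hostname === lastHostname ) { return lastResult; }
                lastHostname = hostname;
                lastResult = trie.matches(hostname);
                return lastResult;
            },
        };
    };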
Performance- and memory-related work. Three more classes have
been created to avoid regex-based filters internally.
The purpose is to enforce filters which have only a single
wildcard in their pattern, a common occurrence. The filter
pattern is split into two literal string segments.
Similar to the above, with the added condition that the filter is
hostname-anchored (`||`). The "Wildcard2" variant is a further
specialization to enforce filters where the only wildcard
is immediately preceded by the `^` special character, again
a very common occurrence.
Using two literal string segments in lieu of regexes allows a
mismatch to be detected quickly by testing just the first segment.
Additionally, this reduces memory footprint as regexes are
much more expensive memory-wise than plain strings.
These three new filter classes make it possible to replace
5,276 regex-based filters internally with plain string-based
filters.
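A hedged sketch of the approach with a single hypothetical class, not
the three actual ones: a pattern containing one wildcard is stored as
two literal string segments, so a mismatch is often detected by testing
only the first segment, and no regex needs to be kept in memory.

    // Illustration only: single-wildcard pattern as two literals.
    class FilterSingleWildcard {
        constructor(pattern) {
            const i = pattern.indexOf('*');
            this.s0 = pattern.slice(0, i);   // literal before the wildcard
            this.s1 = pattern.slice(i + 1);  // literal after the wildcard
        }
        match(url) {
            const i = url.indexOf(this.s0);
            if ( i === -1 ) { return false; }  // cheap early mismatch
            return url.indexOf(this.s1, i + this.s0.length) !== -1;
        }
    }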
The often-called isHnAnchored() has been further fine-tuned to
avoid as much work as possible. I have also observed that
using an arrow function for closure purposes measurably helps
performance, as per the built-in benchmark.