To be used at the console, as an investigation tool for
development purposes.
Using it to verify the contents of the largest
FilterHostnameDict instance, I spotted an all-uppercase
hostname in the HNTrieRef instance:
µBlock.staticNetFilteringEngine.categories.get(0).get(0x10000000).dict.dump();
Thus the changes to static-net-filtering.js are to fix
the erroneous insertion of filters with uppercase
characters. The single instance found was a hostname entry
in Malware Domain List (TRIANGLESERVICESLTD dot COM).
The `null` placeholders are not necessary: we can just use
default arguments instead, and add the HNTrieContainer
references if and only if they are instantiated.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/550
Related Chromium issue (I can't access it):
- https://bugs.chromium.org/p/chromium/issues/detail?id=957866
Findings so far: affects browsers based on Chromium 74.
I could not reproduce the issue with either Chromium 73 or
Google Chrome 75.
This commit is a mitigation: to prevent sites from using
uBO's internal WAR secret for tracking purposes. A secret
can be used for at most one second, after which a new secret
is generated.
The original issue related to the implementation of
secret-gated web accessible resources is:
- https://github.com/gorhill/uBlock/issues/2823
The purpose of this new `:nth-ancestor(n)` operator is to
look up the nth ancestor relative to the currently selected
node.
It is essentially equivalent to `:xpath(..)`, where
ancestor distance is expressed as a number rather than a
sequence of slash-separated `..`.
The rationale for introducing this new procedural selector
is to have a low-overhead way to accomplish ancestor
selection.
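For example, the two following cosmetic filters select the same
node, the grandparent of a hypothetical element matching `.adbox`:
example.com##.adbox:nth-ancestor(2)
example.com##.adbox:xpath(../..)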
This commit implements the alphabetical ordering of HNTrie
nodes, so as to make it possible to bail out early at
HNTrie.matches() time.
Contrary to what I expected, there is no performance gain
observed in HNTrie.matches() as per benchmarks -- I find
the results perplexing.
Because of this I will revert this commit immediately.
The purpose of this commit is to record the changes so
that I can bring them back to life in the future whenever
I want to investigate further.
Consider the two following filters:
example.com
www.example.com
This commit makes it so that if the first filter is
already present in a given HNTrie, the second filter
will not be stored, since HNTrie will _always_
return the first filter as a match whenever the
hostname to match is example.com or any subdomain
of example.com.
The detection of such pointless filters is
virtually free when adding a hostname to an HNTrie
instance (given how data is stored in the trie), so
in practice no overhead is incurred.
The ability to ignore impossible-to-match filters
in HNTrie instances will _especially_ benefit those
using large hosts files.
Examples of how this helps using real configurations:
- Default lists:
444 filters out of 100,382 were ignored as a result
of this commit.
- Default lists + "Energized Ultimate Protection":
283,669 filters out of 903,235 were ignored as a
result of this commit.
Side note: There was no measurable difference between
the two configurations above in the performance of
the matching algorithm as reported by the built-in
benchmark tool.
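A minimal sketch of the detection described above, using a
hypothetical plain-object trie rather than uBO's actual HNTrie
implementation:

  // Hostname characters are stored right-to-left. If the walk
  // reaches a terminal node at a label boundary, the hostname
  // being added is a subdomain of an existing entry, hence
  // pointless to store.
  const addHostname = (trie, hn) => {
      let node = trie;
      for ( let i = hn.length - 1; i >= 0; i-- ) {
          if ( node.end === true && hn.charAt(i) === '.' ) {
              return false; // covered by an existing shorter entry
          }
          const c = hn.charAt(i);
          node = node[c] || (node[c] = Object.create(null));
      }
      node.end = true;
      return true;
  };

  const trie = Object.create(null);
  addHostname(trie, 'example.com');     // true: stored
  addHostname(trie, 'www.example.com'); // false: ignored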
The staticNetFilteringEngine uses token hashes to store/lookup
filters into Map objects.
Before this commit, the tokens were encoded into token hashes
as JS numbers (not exceeding MAX_SAFE_INTEGER) using at most
the first 8 characters of the token.
With this commit, token hashes are now restricted to fit
into 32-bit integers, and are derived from at most the first 7
characters. This improves filter look-up performance as per
built-in benchmark().
Related commit:
- 69a43e07c4
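A minimal sketch of the general idea (hypothetical code, not
uBO's actual tokenizer): fold at most the first 7 character
codes of a token into an unsigned 32-bit integer:

  const tokenHash = token => {
      const n = Math.min(token.length, 7);
      let hash = 0;
      for ( let i = 0; i < n; i++ ) {
          // Multiply-and-add, coerced back to unsigned 32 bits
          hash = (hash * 64 + token.charCodeAt(i)) >>> 0;
      }
      return hash;
  };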
Using 32 bits of token hash rather than just the 16 lower
bits does help discard more unknown tokens.
Using the default filter lists, the known-token lookup
table is populated with 12,276 entries out of 65,536, thus
making the case that, theoretically, a lot of extracted
tokens can be discarded.
In practice, running the built-in
staticNetFilteringEngine.benchmark() with default filter
lists, I find that 1,518,929 tokens were skipped out of
4,441,891 extracted tokens, or 34%.
Related commit:
- 3f3a1543ea
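A minimal sketch of how a 65,536-entry table can still take
advantage of all 32 bits of the hash (hypothetical code; the
actual folding scheme may differ):

  const knownTokens = new Uint8Array(65536);
  // Fold the upper 16 bits into the lower 16 to derive the slot,
  // so tokens differing only in their upper bits are less likely
  // to be mistaken for one another.
  const addKnownToken = th => {
      knownTokens[(th & 0xFFFF) ^ (th >>> 16)] = 1;
  };
  const tokenIsKnown = th =>
      knownTokens[(th & 0xFFFF) ^ (th >>> 16)] !== 0;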
The regression was preventing uBO from finding from which list a filter
originated. This affected only filters for which the `domain=`
option had multiple hostnames.
Given that all tokens extracted from a single URL are potentially
iterated multiple times in a single URL-matching cycle, it pays to
ignore extracted tokens which are known not to be used anywhere in
the static filtering engine.
The gain in processing a single network request in the static
filtering engine can become especially high when dealing with
long and random-looking URLs, which are likely to contain
mostly tokens known not to be in use.
Related commit:
- 99390390fc
The token information available at compile time can be stored
in the filter to be used at match() time. This allows the use of
startsWith() rather than a more costly indexOf() call as a first
quick test to detect mismatches.
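A minimal sketch of the idea (hypothetical code): with the
token's offset within the pattern known at compile time, the
pattern's expected position in the URL is also known at match()
time:

  class PlainFilter {
      constructor(pattern, tokenBeg) {
          this.s = pattern;
          this.tokenBeg = tokenBeg; // offset of token inside pattern
      }
      match(url, urlTokenBeg) {
          // The pattern must start where the token position
          // dictates: a cheap startsWith() test replaces an
          // indexOf() scan.
          return url.startsWith(this.s, urlTokenBeg - this.tokenBeg);
      }
  }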
Due to how web pages typically load secondary resources and due
to how HNTrieContainer instances are used in uBO, there is a
great likelihood that the result of a previous call to
HNTrieRef.matches() can be reused in a subsequent call.
This has been confirmed by instrumenting HNTrieRef.matches().
Since uBO uses distinct HNTrieContainer instances to match
against either the request or the origin hostname, this
means a high likelihood of repeated calls to HNTrieRef.matches()
with the same hostname as argument, hence a performance gain
when caching the argument and result: even though
HNTrie.matches() is fast, comparing two short strings is faster
still when it allows skipping HNTrie.matches() altogether.
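A minimal sketch of the caching (hypothetical code; `container`
stands for an object exposing a matches() method):

  class TrieRef {
      constructor(container, iroot) {
          this.container = container;
          this.iroot = iroot;
          this.lastHostname = undefined;
          this.lastResult = -1;
      }
      matches(hn) {
          // Comparing two short strings is cheaper than walking
          // the trie: reuse the previous result when the argument
          // repeats.
          if ( hn !== this.lastHostname ) {
              this.lastHostname = hn;
              this.lastResult = this.container.matches(this.iroot, hn);
          }
          return this.lastResult;
      }
  }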
Performance- and memory-related work. Three more classes have
been created to avoid regex-based filters internally.
The purpose is to enforce filters which have a single
wildcard in their pattern, a common occurrence. The filter
pattern is split into two literal string segments.
Similar to the above, with the added condition that the filter is
hostname-anchored (`||`). The "Wildcard2" variant is a further
specialization to enforce filters where the only wildcard
is immediately preceded by the `^` special character, again
a very common occurrence.
Using two literal string segments in lieu of regexes makes it
possible to quickly detect a mismatch by testing just the first
segment. Additionally, this reduces memory footprint, as regexes
are much more expensive memory-wise than plain strings.
These three new filter classes allow 5,276 regex-based filters
to be replaced internally with plain string-based ones.
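A minimal sketch of the single-wildcard case (hypothetical code,
not the actual filter class):

  class FilterSingleWildcard {
      constructor(pattern) {
          const pos = pattern.indexOf('*');
          this.s0 = pattern.slice(0, pos);  // literal before the wildcard
          this.s1 = pattern.slice(pos + 1); // literal after the wildcard
      }
      match(url) {
          // A miss on the first literal segment rules the filter
          // out without ever looking at the second segment.
          const beg = url.indexOf(this.s0);
          if ( beg === -1 ) { return false; }
          return url.indexOf(this.s1, beg + this.s0.length) !== -1;
      }
  }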
The often-called isHnAnchored() has been further fine-tuned to
avoid as much work as possible. I have also observed that
using an arrow function for closure purposes measurably helps
performance, as per the built-in benchmark.
The purpose of using a custom base128 encoder is to
convert array buffers into strings, to allow a direct
string-to-array buffer conversion at load time:
string => array buffer
Whereas a JSON array would require an extra step:
JSON array as string => JS array => array buffer
It turns out that the current use of a custom base128 encoding
results in significantly larger selfie storage usage when
converting array buffers into strings.
Speculation: possibly the browser converts the strings to be
saved into JSON strings internally. Since the custom base128
encoder is likely to cause the resulting string to contain
a lot of unprintable ASCII characters, these will need to
be escaped when converted to JSON -- and escaped characters
occupy more space than non-escaped ones.
Using a sequence of base-64 numbers means only printable
characters will be present in the output string, hence no
escaping is necessary. I have observed a significant reduction
in storage usage for selfie purposes.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/528#issuecomment-484408622
Following the STrie-related work in the above issue, I noticed
that a large number of filters in EasyList were filters which only
had to match against the document origin. For instance, among just
the top 10 most populous buckets, there were four such buckets
with over a hundred entries each:
- bits: 72, token: "http", 146 entries
- bits: 72, token: "https", 139 entries
- bits: 88, token: "http", 122 entries
- bits: 88, token: "https", 118 entries
The filters in these buckets have to be matched against all
network requests.
In order to leverage HNTrie for these filters[1], they are now handled
in a special way so as to ensure they all end up in a single HNTrie
(per bucket), which means that instead of scanning hundreds of entries
per URL, there is now a single scan per bucket per URL for these
apply-everywhere filters.
Now, any filter which fulfills ALL the following conditions will
be processed in a special manner internally:
- Is of the form `|https://` or `|http://` or `*`; and
- Does have a `domain=` option; and
- Does not have a negated domain in its `domain=` option; and
- Does not have `csp=` option; and
- Does not have a `redirect=` option
If a filter does not fulfill ALL the conditions above, there is
no change in behavior.
A filter which matches ALL of the above will be processed in a special
manner:
- The `domain=` option will be decomposed so as to create as many
distinct filters as there are distinct values in the `domain=`
option. An example is shown after this list.
- This also applies to the `badfilter` version of the filter, which
means it now becomes possible to `badfilter` only one of the
distinct filters without having to `badfilter` all of them.
- The logger will always report these special filters with only a
single hostname in the `domain=` option.
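For example, a hypothetical filter such as:
*$domain=example.com|example.org
would internally become the equivalent of two distinct filters:
*$domain=example.com
*$domain=example.org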
***
[1] HNTrie is currently WASM-ed on Firefox.
As a development tool for investigation purposes. To use, enter the
following at uBO's dev console:
µBlock.staticNetFilteringEngine.filterClassHistogram()
In the static network filtering engine, the `google` token is too
generic and probably leads to too many false positives, besides
causing an overly large filter bucket.
Implement a plain string trie container class: STrieContainer.
Make use of STrieContainer where beneficial.
Some filter buckets can grow quite large, and in such cases
coalescing "trieable" filter classes into a single trie reduces
lookup time and memory usage.
For instance, at time of commit, the filter bucket for the
`ad` keyword contains 919 entries[1].
Coalescing trieable filters of the same class into a single plain
string trie reduced the size of the bucket to 50 entries plus two
tries, which are scanned only once each whenever the bucket is
visited.
[1] Enter the following code at uBO's dev console:
µBlock.staticNetFilteringEngine.categories.get(0).get(µBlock.urlTokenizer.tokenHashFromString('ad'))
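A minimal sketch of the plain string trie idea (hypothetical
code, not the actual STrieContainer):

  const sTrieAdd = (trie, s) => {
      let node = trie;
      for ( const c of s ) {
          node = node[c] || (node[c] = Object.create(null));
      }
      node.end = true;
  };
  // One walk down the trie tests all coalesced patterns at once,
  // instead of one indexOf() call per pattern.
  const sTrieMatches = (trie, s, i = 0) => {
      let node = trie;
      while ( i < s.length ) {
          node = node[s.charAt(i++)];
          if ( node === undefined ) { return false; }
          if ( node.end === true ) { return true; }
      }
      return false;
  };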
Refactor static network filtering engine code to make use of
ES6's syntactic sugar `class`.
Change first auto-update run from 7 to 5 minutes.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/416
The Chromium version of uBO has declared `unlimitedStorage` since the
extension was first published in 2014. Declaring this permission in
Firefox brings uBO in line with the Chromium version. I suspect some
reported errors could be caused by IndexedDB eviction due to the lack
of the `unlimitedStorage` permission.
Additionally, a timeout has been added when uBO tries to access its
indexedDB storage. It's unclear whether this will help with the
mentioned related issue though, as the root cause is still to be
identified.
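A minimal sketch of such a timeout (hypothetical code, not uBO's
actual implementation):

  const openDbWithTimeout = (name, ms) => Promise.race([
      new Promise((resolve, reject) => {
          const req = indexedDB.open(name);
          req.onsuccess = () => resolve(req.result);
          req.onerror = () => reject(req.error);
      }),
      new Promise((_, reject) => {
          // Give up if indexedDB does not respond in time
          setTimeout(() => reject(new Error('indexedDB timeout')), ms);
      }),
  ]);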
Relocate the workaround to the code responsible for computing the
filtering context, such that the workaround is also applied in other
code paths, for example webRequest.onHeadersReceived.
With the new PSL implementation, benchmarks do not show any benefit
from caching the domain extracted from a hostname for later
reuse -- the caching even seems to add overhead with the new
publicSuffixList implementation.
The default behavior is to fall back to an alternative backend
if the uBO-selected one is not available. However, there will be
no fallback when `cacheStorageAPI` is set to one specific
backend by the user.
The value of `suspendTabsUntilReady` was disregarded in Firefox, and
uBO defaulted to always defer tab loading until it was ready.
This commit makes it possible to disable the deferring of tab loading
in Firefox. The new valid values for `suspendTabsUntilReady` are:
- `unset`: leave it to the platform to pick the optimal
behavior (default)
- `no`: do not suspend tab loading at launch time
- `yes`: suspend tab loading at launch time
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/409
By default, `indexedDB` is used in Firefox as the cache storage
backend.
This commit makes it possible to force the use of
`browser.storage.local` instead as the cache storage backend. For
this to happen, set `cacheStorageAPI` to `browser.storage.local`
in advanced settings.
Additionally, should `indexedDB` not be available for whatever
reason, uBO will automatically fall back to `browser.storage.local`.
These filters are to be considered obsolete since they can't be
matched against network requests in the webRequest API.
They were probably meant to work when ABP was pre-webext, which
means they are quite probably obsolete and there is no longer
a point for uBO to conveniently translate them into CSP directives.
This removes the derivation of FilterOrigin flavors from
FilterOrigin itself and simplifies code paths. FilterOrigin
flavors are small specialized classes; there is no need to
overcomplicate them with derivation.
Specifically, this removes an indirect call to reach the
match() method.
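A minimal sketch of the flattened shape (hypothetical code): each
flavor owns its match() directly rather than reaching a base-class
method through an extra indirection:

  class FilterOriginHit {
      constructor(hostname) {
          this.hostname = hostname;
      }
      match(origin) {
          // Hostname-suffix test with a label-boundary check
          if ( origin.endsWith(this.hostname) === false ) { return false; }
          const i = origin.length - this.hostname.length;
          return i === 0 || origin.charCodeAt(i - 1) === 0x2E /* '.' */;
      }
  }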
As seen at:
https://whotracks.me/blog/adblockers_performance_study.html
The requests.json.gz file can be downloaded from:
https://cdn.cliqz.com/adblocking/requests_top500.json.gz
Copy the file into ./tmp/requests.json.gz
If the file is present when you build uBO using `make-[target].sh` from
the shell, the resulting package will contain `./assets/requests.json`,
which will be looked up by the method below to launch a benchmark
session.
From uBO's dev console, launch the benchmark:
µBlock.staticNetFilteringEngine.benchmark();
The usual browser dev tools can be used to obtain useful profiling
data, i.e. start the profiler, call the benchmark method from the
console, then stop the profiler when it completes.
Keep in mind that the measurements in the blog post above were
obtained with ONLY EasyList. The CPU reportedly used was:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-6600U+%40+2.60GHz&id=2608
Rename ./tmp/requests.json.gz to something else if you no longer want
./assets/requests.json in the build.
The motivation is to address the higher peak memory usage at launch
time with 3rd-gen HNTrie when a selfie was present.
The selfie generation prior to this change was to collect all
filtering data into a single data structure, and then to serialize
that whole structure at once into storage (using JSON.stringify).
However, HNTrie serialization requires that a large Uint32Array be
converted into a plain JS array, which itself would be indirectly
converted into a JSON string. This was the main reason why peak
memory usage would be higher at launch from selfie, since the JSON
string would need to be wholly unserialized into JS objects, which
themselves would need to be converted into more specialized data
structures (like that Uint32Array one).
The solution to lower peak memory usage at launch is to refactor
selfie generation to allow a more piecemeal approach: each filtering
component is given the ability to serialize itself rather than being
forced to be embedded in the master selfie. With this approach, the
HNTrie buffer can now serialize to its own storage entry by converting
the buffer data directly into a string which can be sent to storage
as is. This avoids expensive intermediate steps such as converting
into a JS array and then into a JSON string.
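A minimal sketch of the piecemeal approach (hypothetical code;
`selfieKey` and `toSelfie` are made-up names):

  const saveSelfie = components => Promise.all(
      components.map(c =>
          // Each component produces its own storage-ready string;
          // no master JSON.stringify() over the whole dataset.
          browser.storage.local.set({ [c.selfieKey]: c.toSelfie() })
      )
  );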
As part of the refactoring, there was also an opportunistic code
upgrade to ES6 and Promises (eventually all of uBO's code will be
proper ES6).
Additionally, the polyfill to bring getBytesInUse() to Firefox has
been revisited to replace the rather expensive previous
implementation with one that has virtually no overhead.