To be used at the console, as an investigation tool for
development purposes.
Using it to verify the content of the largest
FilterHostnameDict instance, I spotted an all-uppercase
hostname in the HNTrieRef instance:
µBlock.staticNetFilteringEngine.categories.get(0).get(0x10000000).dict.dump();
Thus the changes to static-net-filtering.js are to fix
the erroneous insertion of filters with uppercase
characters. The single instance found was a hostname entry
in Malware Domain List (TRIANGLESERVICESLTD dot COM).
The `null` placeholders are not necessary; we can just use
default arguments instead, and add the HNTrieContainer
references if and only if they are instantiated.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/550
Related Chromium issue (I can't access it):
- https://bugs.chromium.org/p/chromium/issues/detail?id=957866
Findings so far: affects browsers based on Chromium 74.
I could not reproduce the issue with either Chromium 73 or
Google Chrome 75.
This commit is a mitigation: to prevent sites from using
uBO's internal WAR secret for tracking purposes. A secret
can be used for at most one second, after which a new secret
is generated.
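To illustrate the mechanism, here is a minimal sketch -- hypothetical
names, not uBO's actual code:

    // Regenerate the secret lazily once it is older than one second,
    // so a captured secret cannot be reused for tracking.
    let warSecret = '';
    let warSecretTime = 0;
    const getWarSecret = ( ) => {
        const now = Date.now();
        if ( warSecret === '' || (now - warSecretTime) > 1000 ) {
            warSecret = Math.floor(Math.random() * 0xFFFFFFFF).toString(36);
            warSecretTime = now;
        }
        return warSecret;
    };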
The original issue related to the implementation of
secret-gated web accessible resources is:
- https://github.com/gorhill/uBlock/issues/2823
The purpose of this new `:nth-ancestor(n)` operator is to
look up the nth ancestor relative to the currently selected
node.
It is essentially equivalent to `:xpath(..)`, where
ancestor distance is expressed as a number rather than a
sequence of slash-separated `..`.
The rationale to introduce this new procedural selector
is to have a low overhead way to accomplish ancestor
selection.
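For illustration, using a hypothetical filter, the two following
cosmetic filters select the same element, two ancestor levels
above `.inner-ad`:

    example.com##.inner-ad:nth-ancestor(2)
    example.com##.inner-ad:xpath(../..)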
This commit implements the alphabetical ordering of HNTrie
nodes, so as to make it possible to bail out early at
HNTrie.matches() time.
Contrary to what I expected, there is no performance gain
observed in HNTrie.matches() as per benchmarks -- I find
the results perplexing.
Because of this I will revert this commit immediately.
The purpose of this commit is to record the changes so
that I can bring them back to life in the future whenever
I want to investigate further.
Consider the two following filters:
example.com
www.example.com
This commit makes it so that if the first filter is
already present in a given HNTrie, the second filter
will not be stored, since HNTrie will _always_
return the first filter as a match whenever the
hostname to match is example.com or any subdomain
of example.com.
The detection of such pointless filters is
virtually free when adding a hostname to an HNTrie
instance (given how data is stored in the trie), so
in practice detecting them incurs no overhead.
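Conceptually -- this is a sketch of the check, not the actual trie
code -- a candidate hostname is pointless if an already-stored
hostname equals it or is a suffix of it at a label boundary:

    const isPointless = (stored, candidate) => {
        if ( candidate === stored ) { return true; }
        return candidate.endsWith(stored) &&
               candidate.charCodeAt(candidate.length - stored.length - 1) === 0x2E /* '.' */;
    };
    isPointless('example.com', 'www.example.com');  // true
    isPointless('example.com', 'notexample.com');   // false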
The ability to ignore impossible-to-match filters
in HNTrie instances will _especially_ benefit those
using large hosts files.
Examples of how this helps using real configurations:
- Default lists:
444 filters out of 100,382 were ignored as a result
of this commit.
- Default lists + "Energized Ultimate Protection":
283,669 filters out of 903,235 were ignored as a
result of this commit.
Side note: There was no measurable difference between
the two configurations above in the performance of
the matching algorithm as reported by the built-in
benchmark tool.
The staticNetFilteringEngine uses token hashes to store/look up
filters in Map objects.
Before this commit, tokens were encoded into token hashes
as JS numbers (not exceeding MAX_SAFE_INTEGER) using at most
the first 8 characters of the token.
With this commit, token hashes are now restricted to fit
into 32-bit integers, and are derived from at most the first 7
characters. This improves filter look-up performance as per
built-in benchmark().
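A rough sketch of the kind of hashing described -- the actual
hashing scheme in uBO may differ:

    // Derive a token hash which fits into a 32-bit unsigned integer
    // from at most the first 7 characters of a token.
    const tokenHashFromString = s => {
        const n = Math.min(s.length, 7);
        let hash = 0;
        for ( let i = 0; i < n; i++ ) {
            hash = (hash * 64 + s.charCodeAt(i)) >>> 0;
        }
        return hash;
    };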
Related commit:
- 69a43e07c4
Using 32 bits of token hash rather than just the 16 lower
bits does help discard more unknown tokens.
Using the default filter lists, the known-token lookup
table is populated with 12,276 entries out of 65,536, which
suggests that in theory a large number of extracted tokens
can be discarded.
In practice, running the built-in
staticNetFilteringEngine.benchmark() with default filter
lists, I find that 1,518,929 tokens were skipped out of
4,441,891 extracted tokens, or 34%.
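A hypothetical sketch of such a known-token lookup table -- the
actual layout in uBO may differ -- where the full 32-bit hash is
folded into a 16-bit slot index, so unknown tokens can be rejected
with a single typed-array read:

    const knownTokens = new Uint8Array(65536);
    const addKnownToken = th => {
        knownTokens[(th & 0xFFFF) ^ (th >>> 16)] = 1;
    };
    const isKnownToken = th =>
        knownTokens[(th & 0xFFFF) ^ (th >>> 16)] !== 0;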
Related commit:
- 3f3a1543ea
The regression was preventing uBO from finding from which list a filter
originated. This affected only filters for which the `domain=`
option had multiple hostnames.
Given that all tokens extracted from one single URL are potentially
iterated multiple times in a single URL-matching cycle, it pays to
ignore extracted tokens which are known to not be used anywhere in
the static filtering engine.
The gain in processing a single network request in the static
filtering engine can become especially high when dealing with
long and random-looking URLs, which are likely to contain mostly
tokens that are known to not be in use.
Related commit:
- 99390390fc
The token information available at compile time can be stored
in the filter to be used at match() time. This allows the use of
startsWith() rather than a more costly indexOf() call as a first
quick test to detect mismatches.
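A hypothetical sketch of the idea (names and signatures are
illustrative, not uBO's actual classes):

    // The position of the token within the filter pattern is computed at
    // compile time and stored in the filter; at match() time the expected
    // start of the pattern in the URL is derived from where the token was
    // found, allowing a positional startsWith() instead of an indexOf() scan.
    class FilterPlainSketch {
        constructor(pattern, tokenBeg) {
            this.pattern = pattern;    // e.g. '/adbanner/'
            this.tokenBeg = tokenBeg;  // offset of the token within the pattern
        }
        match(url, tokenBegInUrl) {
            return url.startsWith(this.pattern, tokenBegInUrl - this.tokenBeg);
        }
    }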
Due to how web pages typically load secondary resources and due
to how HNTrieContainer instances are used in uBO, there is a
great likelihood that the result of a previous call to
HNTrieRef.matches() can be reused in a subsequent call.
This has been confirmed by instrumenting HNTrieRef.matches().
Since uBO uses distinct HNTrieContainer instances to either
match against the request or the origin hostnames, this
means a high likelihood of repeated calls to HNTrieRef.matches()
with the same hostname as argument, hence a performance gain
from caching the argument and result: even though
HNTrie.matches() is fast, comparing two short strings is even
faster when it allows skipping HNTrie.matches() altogether.
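A minimal sketch of such caching, assuming a hypothetical container API:

    class HNTrieRefSketch {
        constructor(container, iroot) {
            this.container = container;  // assumed to expose matches(iroot, needle)
            this.iroot = iroot;
            this.lastNeedle = '';
            this.lastResult = -1;
        }
        matches(needle) {
            if ( needle !== this.lastNeedle ) {
                this.lastNeedle = needle;
                this.lastResult = this.container.matches(this.iroot, needle);
            }
            return this.lastResult;
        }
    }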
Performance- and memory-related work. Three more classes have
been created to avoid regex-based filters internally.
Purpose is to enforce filters which have only one single
wildcard in their pattern, a common occurrence. The filter
pattern is split in two literal string segments.
Similar to the above, with the added condition that the filter is
hostname-anchored (`||`). The "Wildcard2" variant is a further
specialization to enforce filters where the only wildcard
is immediately preceded by the `^` special character, again
a very common occurrence.
Using two literal string segments in lieu of regexes allows a
mismatch to be detected quickly by testing just the first segment.
Additionally, this reduces memory footprint as regexes are
much more expensive memory-wise than plain strings.
These three new filter classes allow 5,276 regex-based filters
to be replaced internally with plain string-based
filters.
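A sketch of the single-wildcard idea -- illustrative only, not the
actual filter classes:

    class FilterWildcard1Sketch {
        constructor(pattern) {
            const i = pattern.indexOf('*');
            this.s0 = pattern.slice(0, i);       // literal segment before the wildcard
            this.s1 = pattern.slice(i + 1);      // literal segment after the wildcard
        }
        match(url) {
            const pos = url.indexOf(this.s0);
            if ( pos === -1 ) { return false; }  // mismatch detected from first segment alone
            return url.indexOf(this.s1, pos + this.s0.length) !== -1;
        }
    }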
Often-called isHnAnchored() has been further fine-tuned to
avoid as much work as possible. I have also observed that
using an arrow function for closure purposes measurably helps
performance, as per the built-in benchmark.
The purpose of using a custom base128 encoder is to
convert array buffers into strings, to allow a direct
string-to-array buffer conversion at load time:
string => array buffer
Whereas a JSON array would require an extra step:
JSON array as string => JS array => array buffer
It turns out that the current use of a custom base128 encoding
results in significantly larger selfie storage usage when
converting array buffers into strings.
Speculation: possibly the browser converts the strings to be
saved into JSON strings internally. Since the custom base128
encoder is likely to cause the resulting string to contain
a lot of unprintable ASCII characters, these will need to
be escaped when converted to JSON -- escaped characters
occupy more space than non-escaped ones.
Using a sequence of base-64 numbers means only printable
characters will be present in the output string, hence no
escaping is necessary. I have observed a significant reduction in
storage usage for selfie purposes.
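A sketch of what such an encoder can look like -- the character set
and framing used by uBO may differ:

    // Serialize a Uint32Array as a sequence of base-64 'digits' drawn
    // from a printable character set, so the resulting string needs no
    // escaping when stored.
    const digits =
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
    const uint32ToBase64 = v => {
        let s = '';
        do {
            s = digits.charAt(v & 63) + s;
            v >>>= 6;
        } while ( v !== 0 );
        return s;
    };
    const uint32ArrayToString = arr =>
        Array.from(arr, uint32ToBase64).join(' ');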
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/528#issuecomment-484408622
Following STrie-related work in the above issue, I noticed that a large
number of filters in EasyList were filters which only had to match
against the document origin. For instance, among just the top 10
most populous buckets, there were four such buckets with over
a hundred entries each:
- bits: 72, token: "http", 146 entries
- bits: 72, token: "https", 139 entries
- bits: 88, token: "http", 122 entries
- bits: 88, token: "https", 118 entries
The filters in these buckets have to be matched against all
network requests.
In order to leverage HNTrie for these filters[1], they are now handled
in a special way so as to ensure they all end up in a single HNTrie
(per bucket), which means that instead of scanning hundreds of entries
per URL, there is now a single scan per bucket per URL for these
apply-everywhere filters.
Now, any filter which fulfills ALL the following conditions will be
processed in a special manner internally:
- Is of the form `|https://` or `|http://` or `*`; and
- Does have a `domain=` option; and
- Does not have a negated domain in its `domain=` option; and
- Does not have `csp=` option; and
- Does not have a `redirect=` option
If a filter does not fulfill ALL the conditions above, there is no
change in behavior.
A filter which matches ALL of the above will be processed in a special
manner:
- The `domain=` option will be decomposed so as to create as many
distinct filters as there are distinct values in the `domain=` option
- This also applies to the `badfilter` version of the filter, which
means it now becomes possible to `badfilter` only one of the
distinct filters without having to `badfilter` all of them.
- The logger will always report these special filters with only a
single hostname in the `domain=` option.
***
[1] HNTrie is currently WASM-ed on Firefox.
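For illustration, with a hypothetical filter, a filter such as:

    *$domain=example.com|example.org|example.net

is decomposed internally into the equivalent of three distinct filters:

    *$domain=example.com
    *$domain=example.org
    *$domain=example.net

each of which can then be stored as a single hostname entry in the
per-bucket HNTrie, and `badfilter`-ed individually.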
As a development tool for investigation purposes. To use, enter the
following at uBO's dev console:
µBlock.staticNetFilteringEngine.filterClassHistogram()
In the static network filtering engine, the `google` token is too
generic and probably leads to too many false positives, besides
causing a too-large filter bucket.
Implement a plain string trie container class: STrieContainer.
Make use of STrieContainer where beneficial
Some filter buckets can grow quite large, and in such cases
coalescing "trieable" filter classes into a single trie reduces
lookup time and memory usage.
For instance, at time of commit, the filter bucket for the
`ad` keyword contains 919 entries[1].
Coalescing trieable filters of the same class into a single plain
string trie reduced the size of the bucket to 50 entries plus two
tries, which are scanned only once each whenever the bucket is
visited.
[1] Enter the following code at uBO's dev console:
µBlock.staticNetFilteringEngine.categories.get(0).get(µBlock.urlTokenizer.tokenHashFromString('ad'))
Refactor static network filtering engine code to make use of
ES6's syntactic sugar `class`.
Change first auto-update run from 7 to 5 minutes.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/416
The Chromium version of uBO has declared `unlimitedStorage` since the
extension was first published in 2014. Declaring this permission in
Firefox brings uBO in line with the Chromium version. I suspect some
reported errors could be caused by IndexedDB eviction due to the lack
of `unlimitedStorage` permission.
Additionally, a timeout has been added when uBO tries to access its
indexedDB storage. It's unclear whether this will help with the
related issue mentioned above, though, as the root cause is still
to be identified.
Relocate the workaround to the code responsible for computing the
filtering context, such that the workaround will also be applied in
other code paths, for example webRequest.onHeadersReceived.
With the new PSL implementation, benchmarks do not show a benefit
from caching the domain extracted from a hostname for later
reuse -- the caching even seems to add overhead with the new
publicSuffixList implementation.
Default behavior is to fall back to an alternative backend
if the uBO-selected one is not available. However, there will be
no fallback when `cacheStorageAPI` is set to a specific
backend by the user.
The value of `suspendTabsUntilReady` was disregarded in Firefox and
uBO defaulted to always defer tab loading until it was ready.
This commit allows disabling the deferring of tab loading in
Firefox. The new valid values for `suspendTabsUntilReady` are:
- `unset`: leave it to the platform to pick the optimal
behavior (default)
- `no`: do not suspend tab loading at launch time
- `yes`: suspend tab loading at launch time
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/409
By default, `indexedDB` is used in Firefox as the cache storage
backend.
This commit allows forcing the use of `browser.storage.local` instead
as cache storage backend. For this to happen, set `cacheStorageAPI` to
`browser.storage.local` in advanced settings.
Additionally, should `indexedDB` not be available for whatever reason,
uBO will automatically fall back to `browser.storage.local`.
These filters are to be considered obsolete since they can't be
matched against network requests in the webRequest API.
They were probably meant to work when ABP was pre-webext, which
means they are quite probably obsolete and there is no longer
a point for uBO to conveniently translate them into CSP directives.
This removes the derivation of FilterOrigin flavors from
FilterOrigin itself and simplifies code paths. FilterOrigin
flavors are small specialized classes, no need to
overcomplicate with derivation.
Specifically, this removes an indirect call to reach the
match() method.
As seen at:
https://whotracks.me/blog/adblockers_performance_study.html
The requests.json.gz file can be downloaded from:
https://cdn.cliqz.com/adblocking/requests_top500.json.gz
Copy the file into ./tmp/requests.json.gz
If the file is present when you build uBO using `make-[target].sh` from
the shell, the resulting package will contain `./assets/requests.json`,
which will be looked up by the method below to launch a benchmark
session.
From uBO's dev console, launch the benchmark:
µBlock.staticNetFilteringEngine.benchmark();
The usual browser dev tools can be used to obtain useful profiling
data, i.e. start the profiler, call the benchmark method from the
console, then stop the profiler when it completes.
Keep in mind that the measurements in the blog post above were obtained
with ONLY EasyList. The CPU reportedly used was:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-6600U+%40+2.60GHz&id=2608
Rename ./tmp/requests.json.gz to something else if you no longer want
./assets/requests.json in the build.
The motivation is to address the higher peak memory usage at launch
time with 3rd-gen HNTrie when a selfie was present.
The selfie generation prior to this change was to collect all
filtering data into a single data structure, and then to serialize
that whole structure at once into storage (using JSON.stringify).
However, HNTrie serialization requires that a large Uint32Array be
converted into a plain JS array, which itself would be indirectly
converted into a JSON string. This was the main reason why peak
memory usage would be higher at launch from selfie, since the JSON
string would need to be wholly unserialized into JS objects, which
themselves would need to be converted into more specialized data
structures (like that Uint32Array one).
The solution to lower peak memory usage at launch is to refactor
selfie generation to allow a more piecemeal approach: each filtering
component is given the ability to serialize itself rather than to be
forced to be embedded in the master selfie. With this approach, the
HNTrie buffer can now serialize to its own storage by converting the
buffer data directly into a string which can be directly sent to
storage. This avoids expensive intermediate steps such as
converting into a JS array and then into a JSON string.
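A toy sketch of the piecemeal approach, with an in-memory Map standing
in for the real cache storage (component names and data are illustrative):

    const storage = new Map();
    const components = [
        { name: 'staticNetFilteringEngine', toSelfie: ( ) => JSON.stringify({ /* compact filter data */ }) },
        { name: 'hntrieContainer', toSelfie: ( ) => 'QmFzZTY0IGJ1ZmZlcg' /* buffer already encoded as a string */ },
    ];
    const saveSelfies = ( ) => Promise.all(
        components.map(c =>
            Promise.resolve(c.toSelfie()).then(data => {
                storage.set(`selfie/${c.name}`, data);
            })
        )
    );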
As part of the refactoring, there was also opportunistic code
upgrade to ES6 and Promise (eventually all of uBO's code will be
proper ES6).
Additionally, the polyfill to bring getBytesInUse() to Firefox has
been revisited to replace the rather expensive previous
implementation with one that has virtually no overhead.
The Promise chain was not properly designed for WASM module
loading. This became apparent when removing WASM modules
from Opera build[1].
The problem was that errors thrown by fetch() -- used to
load WASM modules -- were not properly handled.
[1] Opera refuses to update uBO if there are unrecognized file
types in the package, and `.wasm`/`.wat` files are not
recognized by Opera's uploader.
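A sketch of the intended error handling -- the path below is hypothetical:

    // A missing .wasm file must resolve the loading promise to null so
    // that the caller can fall back to the pure-JS implementation.
    const loadWasmModule = path =>
        fetch(path)
            .then(response => response.arrayBuffer())
            .then(bytes => WebAssembly.instantiate(bytes))
            .then(result => result.instance)
            .catch(( ) => null);

    loadWasmModule('./js/wasm/hntrie.wasm').then(instance => {
        if ( instance === null ) { /* use the JS code path */ }
    });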
Related issue:
- https://github.com/NanoAdblocker/NanoCore/issues/239
The erroneous behavior was to compute the URL of a sublist as
relative to the URL of the root list, which may differ from the
URL of a parent list.
Those spurious disconnections have been observed to occur at
uBO's launch time.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/403
I have observed that this fixes an issue observed on Firefox 64
(current stable).
The reported Waterfox issue *may* be fixed as a result. If not,
the issue is still considered fixed, as Waterfox is not
officially supported.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/399
The advanced setting `cacheStorageAPI` has been added to allow
a user to force the use of IndexedDB as cache storage. Set to
`IndexedDB` to force the use of IndexedDB. Defaults to `unset`.
Performance-related work: the logger data has been decoupled
from the DOM -- inspired by CodeMirror's way of efficiently
handling large amounts of text data.
This decoupling now makes the logger highly efficient CPU- and
memory-wise, and opens the way to more possibilities.
Ability to configure some aspect of the logger behavior and
visuals:
- The hard-coded limit of 5000 entries has been
removed and is now replaced with a variety of
user-configurable settings to enforce the discarding of
logger entries.
- Some columns in the logger output can now be hidden.
The filter list look-up feature has been merged into the
existing overlay dialog used to create URL rules or static
filters, as an entry in a new "Details" pane.
Other issues addressed during refactoring:
- https://github.com/uBlockOrigin/uBlock-issues/issues/280
- https://github.com/gorhill/uBlock/issues/1999
The minimum version supported on Firefox has been bumped
up to 55.0.
Related issues:
- https://github.com/uBlockOrigin/uBlock-issues/issues/372
- https://github.com/gorhill/uBlock/issues/93
A new advanced setting has been added: `autoCommentFilterTemplate`.
Default value is `{{date}} {{origin}}`.
Placeholders are identified by `{{...}}`. There are currently
only three placeholders supported:
- `{{date}}`: will be replaced with current date
- `{{time}}`: will be replaced with current time
- `{{origin}}`: will be replaced with information about the site on
which the filter(s) was created
If no placeholder is found in `autoCommentFilterTemplate`, this
will disable auto-commenting. So one can use `-` to disable
auto-commenting.
Additionally, if auto-commenting is enabled, uBO will not emit a
comment if an emitted comment would be a duplicate of the last
one found in the user filter list.
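A sketch of the placeholder expansion -- the exact date/time
formatting used by uBO may differ:

    const expandTemplate = (template, origin) => {
        const now = new Date();
        return template
            .replace('{{date}}', now.toLocaleDateString())
            .replace('{{time}}', now.toLocaleTimeString())
            .replace('{{origin}}', origin);
    };
    expandTemplate('{{date}} {{origin}}', 'example.com');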
The DOM surveyor will now use time-based logic to spread its work
over time. This allows the surveying to better scale down on
slower devices.
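A sketch of what time-based work spreading can look like --
illustrative only, not the actual surveyor code:

    // Process pending nodes in slices bounded by a time budget, yielding
    // back to the event loop between slices.
    const surveySlice = (pending, budgetMs, processOne) => {
        const deadline = performance.now() + budgetMs;
        while ( pending.length !== 0 && performance.now() < deadline ) {
            processOne(pending.pop());
        }
        if ( pending.length !== 0 ) {
            setTimeout(( ) => { surveySlice(pending, budgetMs, processOne); }, 0);
        }
    };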
Additionally, the DOM surveyor code has been reworked to reduce
memory churn as much as possible when collating nodes to survey.
This rework has been motivated after profiling the "monstrous DOM"
seen in the following page:
<https://doc.rust-lang.org/std/iter/trait.Iterator.html>
The idea is that making the DOM surveyor efficient on such
"monstrous DOM" case should make it efficient everywhere in
practice.
- Avoid concatenating with an empty array: though the concatenated
array is empty, this still forces the creation of a whole new
array, per the semantics of Array.prototype.concat() -- see the
snippet after this list.
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat>
- Do not convert arrays to strings when sending data to
main process in surveyPhase1(): I no longer see any benefit in
doing so in profiling data (if I recall properly this was
benefiting Firefox, but I can't remember for sure anymore why
I chose to do so back then).
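The snippet below illustrates the concat() point:

    const a = [ 1, 2, 3 ];
    const b = a.concat([]);
    console.log(b);        // [ 1, 2, 3 ]
    console.log(a === b);  // false -- a whole new array was allocated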
Related issue:
- https://github.com/gorhill/uBlock/issues/3683
This commit further increases the compatibility of uBO's procedural
cosmetic filters with Adguard's cosmetic filter syntax -- specifically
those procedural cosmetic filters where plain CSS selectors appear
following a procedural operator (this was previously rejected as
invalid by uBO).
Also, experimental support for `:watch-attrs` procedural
operator, as discussed in <https://github.com/uBlockOrigin/uBlock-issues/issues/341#issuecomment-449765525>.
Support may be dropped before next release depending on whether
a better solution is suggested.
Additionally, the usual opportunistic refactoring toward ES6
syntax.
Related issue:
- https://github.com/gorhill/uBlock/issues/3708
This was brought up in the issue above but I ended up forgetting
about it after I focused mostly on the second issue raised
in there.
commit 7c6cacc59b27660fabacb55d668ef099b222a9e6
Author: Raymond Hill <rhill@raymondhill.net>
Date: Sat Nov 3 08:52:51 2018 -0300
code review: finalize support for wasm-based hntrie
commit 8596ed80e3bdac2c36e3c860b51e7189f6bc8487
Merge: cbe1f2e 000eb82
Author: Raymond Hill <rhill@raymondhill.net>
Date: Sat Nov 3 08:41:40 2018 -0300
Merge branch 'master' of github.com:gorhill/uBlock into trie-wasm
commit cbe1f2e2f38484d42af3204ec7f1b5decd30f99e
Merge: 270fc7f dbb7e80
Author: Raymond Hill <rhill@raymondhill.net>
Date: Fri Nov 2 17:43:20 2018 -0300
Merge branch 'master' of github.com:gorhill/uBlock into trie-wasm
commit 270fc7f9b3b73d79e6355522c1a42ce782fe7e5c
Merge: d2a89cf d693d4f
Author: Raymond Hill <rhill@raymondhill.net>
Date: Fri Nov 2 16:21:08 2018 -0300
Merge branch 'master' of github.com:gorhill/uBlock into trie-wasm
commit d2a89cf28f0816ffd4617c2c7b4ccfcdcc30e1b4
Merge: d7afc78 649f82f
Author: Raymond Hill <rhill@raymondhill.net>
Date: Fri Nov 2 14:54:58 2018 -0300
Merge branch 'master' of github.com:gorhill/uBlock into trie-wasm
commit d7afc78b5f5675d7d34c5a1d0ec3099a77caef49
Author: Raymond Hill <rhill@raymondhill.net>
Date: Fri Nov 2 13:56:11 2018 -0300
finalize wasm-based hntrie implementation
commit e7b9e043cf36ad055791713e34eb0322dec84627
Author: Raymond Hill <rhill@raymondhill.net>
Date: Fri Nov 2 08:14:02 2018 -0300
add first-pass implementation of wasm version of hntrie
commit 1015cb34624f3ef73ace58b58fe4e03dfc59897f
Author: Raymond Hill <rhill@raymondhill.net>
Date: Wed Oct 31 17:16:47 2018 -0300
back up draft work toward experimenting with wasm hntries
<https://github.com/gorhill/uBlock/issues/3436>: a new per-site switch
has been added, no-scripting, whose purpose is to wholly disable/enable
javascript for a given site. This new switch has precedence over all
other ways javascript can be disabled, including precedence over dynamic
filtering rules.
The popup panel will report the number of script resources which have
been seen by uBO for the current page. There is a minor inaccuracy to
be fixed regarding the count, and the fix requires extending request
journaling.
<https://github.com/gorhill/uBlock/issues/308>: the `noscript` tags will
now be respected when the new no-scripting switch is in effect on a given
site.
A default setting has been added to the _Settings_ pane to
globally disable/enable the new no-scripting switch, such that one can
work in default-deny mode regarding javascript execution.
<https://github.com/uBlockOrigin/uBlock-issues/issues/155>: a new
hidden setting, `requestJournalProcessPeriod`, has been added to
allow controlling the delay before uBO internally processes its
network request journal queue. Defaults to 1000 (milliseconds).
Squashed commit of the following:
commit 6a8473822537636ac54d5dabdb14472114bb730b
Author: Raymond Hill <rhill@raymondhill.net>
Date: Mon Aug 6 10:56:44 2018 -0400
remove remnant of snappyjs and spurious instruction
commit 9a4b709bee97d3cc2235fab602359fa5953bdb46
Author: Raymond Hill <rhill@raymondhill.net>
Date: Mon Aug 6 09:48:58 2018 -0400
make cache storage compression optionally available on all platforms
New advanced setting: `cacheStorageCompression`. Default is `false`.
commit 22ee6547f2f7c9c5aefe25dea1262a1b31612155
Author: Raymond Hill <rhill@raymondhill.net>
Date: Sun Aug 5 19:16:26 2018 -0400
remove Chromium from lz4 experiment
commit ee3e201c45afe983508f70713a2d43af74737d8d
Author: Raymond Hill <rhill@raymondhill.net>
Date: Sun Aug 5 18:52:43 2018 -0400
import lz4-block-codec.wasm library
commit 883a3118efcfd749c82356fde7134754d6ae371d
Author: Raymond Hill <rhill@raymondhill.net>
Date: Sun Aug 5 18:50:46 2018 -0400
implement storage compression through lz4-wasm [draft]
commit 48d1ccaba407de447c2cd6747dc3a90839c260a7
Merge: 8ae77e6 b34c897
Author: Raymond Hill <rhill@raymondhill.net>
Date: Sat Aug 4 08:56:51 2018 -0400
Merge branch 'master' of github.com:gorhill/uBlock into lz4
commit 8ae77e6aeeaa85af335e664c2560d2afd37288c6
Author: Raymond Hill <rhill@raymondhill.net>
Date: Wed Jul 25 18:17:45 2018 -0400
experiment with compression
potentially causing high-generic cosmetic filters to not be applied
because the MRU cache contains an empty list of high-generic filters
when there is a query from a content script for cosmetic filters
before they are fully loaded and ready.
- collate together specific filters with same base domain
- replace string-based hash with integer-based hash
- revisit code to benefit from ES6-specific syntax
- Default to being detached
- Default to "Current tab"
- Append current tab title to "Current tab" entry
- Avoid iterating through all tabs when no change
* Fix leftovers from old code.
* Change changes.procedural.size to changes.procedural.length:
changes.procedural is an array, so it should be changes.procedural.length.
The code appeared to work with changes.procedural.size because (undefined !== 0) always evaluates to true.
Only resources from within current directory will be allowed,
everything else will be silently rejected.
For example, this will forbid pulling lists from different repos
on GitHub, despite the lists being same origin.
A new filtering class has been created: "static extended filtering".
This new class is an umbrella class for more specialized filtering
engines:
- Cosmetic filtering
- Scriptlet filtering
- HTML filtering
HTML filtering is available only on platforms which support modifying
the response body on the fly, so only Firefox 57+ at the moment.
With the ability to modify the response body, HTML filtering has
been introduced: removing elements from the DOM before the source
data has been parsed by the browser.
A consequence of the HTML filtering ability is to bring back the
script tag filtering feature.
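A minimal sketch of response-body filtering on Firefox 57+, assuming
a utf-8 document and omitting the actual element removal -- not uBO's
actual code:

    const filterHTML = details => {
        const filter = browser.webRequest.filterResponseData(details.requestId);
        const decoder = new TextDecoder('utf-8');
        const encoder = new TextEncoder();
        let html = '';
        filter.ondata = event => {
            html += decoder.decode(event.data, { stream: true });
        };
        filter.onstop = ( ) => {
            // ... parse `html` and remove the targeted elements here ...
            filter.write(encoder.encode(html));
            filter.close();
        };
    };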
* dom-inspector: Improvements
- Fix race between userCSS injection and element highlighting resulting in no or not all elements being highlighted.
- Fix page being scanned twice resulting in unneeded slowdown.
* dom-inspector: Clear mutationTimer to allow more than one update.
* dom-inspector: Fix procedural filters shown as declarative with expando.