Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/1365
This commit adds the compiled magic version number to the
compiled data itself; consequently, any compiled list with
a mismatched format can be detected and discarded when it
is loaded, rather than at launch time.
Given this change, uBO no longer needs to rely on the
deletion of cached data at launch time to ensure it
won't use stale compiled lists.
filterUnits is now treated as a buffer which is
pre-allocated and which will grow in chunks so as
to minimize memory allocations. Entries are never
released, just null-ed.
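A minimal sketch of this buffer strategy (names and chunk
size are illustrative, not the actual uBO code):

    // Illustrative sketch: pre-allocated unit buffer grown in chunks.
    const CHUNK_SIZE = 256;
    let filterUnits = new Array(CHUNK_SIZE).fill(null);
    let writePtr = 0;

    function filterUnitAdd(unit) {
        if ( writePtr === filterUnits.length ) {
            // Grow one chunk at a time rather than per entry
            filterUnits = filterUnits.concat(
                new Array(CHUNK_SIZE).fill(null)
            );
        }
        filterUnits[writePtr] = unit;
        return writePtr++;
    }

    function filterUnitRelease(i) {
        filterUnits[i] = null; // entries are null-ed, never removed
    }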
Additionally, move urlTokenizer into the static
network filtering engine, since it's not used
anywhere else.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/760
The purpose of this new network filter option is to remove
query parameters from the URL of network requests.
The name `queryprune` has been picked over `querystrip`
since the purpose of the option is to remove some
parameters from the URL rather than all parameters.
`queryprune` is a modifier option (like `csp`) in that it
does not cause a network request to be blocked but rather
modified before being emitted.
`queryprune` must be assigned a value, which determines
which parameters will be removed from a query string.
The syntax for the value is that of a regular
expression, *except* for the following rules:
- do not wrap the regex directive between `/`
- do not use regex special values `^` and `$`
- do not use a literal comma character in the value,
  though you can use the hex-encoded version, `\x2c`
- to match the start of a query parameter, prepend `|`
- to match the end of a query parameter, append `|`
`queryprune` regex-like values will be tested against each
key-value parameter pair as a `[key]=[value]` string. This
way you can prune according to the key, the value, or both.
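A minimal sketch of how a compiled `queryprune` value could
be applied to a URL (hypothetical helper, not the actual uBO
code; `reValue` would be built from the filter's value, with
a leading/trailing `|` translated to `^`/`$`):

    // Illustrative: prune matching key-value pairs from a URL.
    function pruneQuery(url, reValue) {
        const pos = url.indexOf('?');
        if ( pos === -1 ) { return url; }
        const params = new URLSearchParams(url.slice(pos + 1));
        for ( const [ key, value ] of Array.from(params) ) {
            // Each pair is tested as a `[key]=[value]` string
            if ( reValue.test(`${key}=${value}`) ) {
                params.delete(key);
            }
        }
        const query = params.toString();
        return url.slice(0, pos) + (query !== '' ? `?${query}` : '');
    }

For example, `queryprune=|utm_` would compile to /^utm_/ and
remove parameters such as `utm_source` and `utm_medium`.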
This commit introduces the concept of modifier filter
options, which as of now are:
- `csp=`
- `queryprune=`
They both work in a similar way when used with the
`important` option or when used in exception filters.
Modifier options can apply to any network request, hence
the logger now reports the actual type of the network
request and no longer uses the modifier as the type,
i.e. `csp` filters are no longer reported as requests of
type `csp`.
Though modifier options can apply to any network request,
for the time being the `csp=` modifier option still applies
only to top or embedded (frame) documents, just as before.
At some point in the future we may want to apply `csp=`
directives to network requests of type script, for example
to control the behavior of service workers.
A new built-in filter expression has been added to the
logger: "modified", which allows one to see all the network
requests which were modified before being emitted. The
translation work for this new expression will be available
in a future commit.
Additionally, add a button in the About pane
to launch benchmark sessions. The button will
be available only when the advanced setting
`benchmarkDatasetURL` is set and points to
a valid dataset.
Cloud storage is a limited resource, and thus it
makes sense to support data compression before
sending the data to cloud storage.
A new hidden setting allows toggling on
cloud storage compression:
name: cloudStorageCompression
default: false
By default, this hidden setting is `false`, and a
user must set it to `true` to enable compression
of cloud storage items.
This hidden setting will eventually be toggled
to `true` by default, when there is good confidence
a majority of users are using a version of uBO
which can properly handle compressed cloud storage
items.
A cursory assessment shows that compressed items
are roughly 40-50% smaller in size.
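A sketch of how the toggle might gate compression (the
codec shown is just a stand-in; `hiddenSettings` and the
storage call are assumptions, not the actual uBO code):

    // Illustrative only: compress cloud storage items when the
    // hidden setting is enabled.
    async function compressString(text) {
        const stream = new Blob([ text ])
            .stream()
            .pipeThrough(new CompressionStream('deflate'));
        return new Response(stream).arrayBuffer();
    }

    async function cloudStoragePush(key, data) {
        if ( hiddenSettings.cloudStorageCompression ) {
            data = await compressString(data); // hypothetical gating
        }
        // ... send `data` to cloud storage
    }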
Related commit:
- https://github.com/gorhill/uBlock/commit/703c525b01aa
This adds an indentation requirement for line
continuation to take place. The conditions are now
as follows:
- Current line ends with ` \`: ASCII space + backslash
- Next line starts with `    `: four ASCII spaces
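A sketch of the resulting join logic (illustrative; exact
join semantics may differ):

    // Illustrative: join a line to the previous one only when the
    // previous line ends with ' \' and the current line starts
    // with four ASCII spaces.
    function joinContinuations(lines) {
        const out = [];
        for ( const line of lines ) {
            const last = out.length - 1;
            if (
                last !== -1 &&
                out[last].endsWith(' \\') &&
                line.startsWith('    ')
            ) {
                out[last] = out[last].slice(0, -2) + line.trimStart();
                continue;
            }
            out.push(line);
        }
        return out;
    }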
New advanced setting: `benchmarkDatasetURL`
Default value: `unset`
To specify a URL from which the benchmark dataset will be
fetched. This allows launching benchmark operations from
within published versions of uBO, rather than from just
a locally built version.
Related feedback:
- https://www.reddit.com/r/uBlockOrigin/comments/dzw57l/
Each token requires two slots in the token indices
array. This commit fixes uBO breaking when dealing
with very long URLs containing lots of distinct
tokens.
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/761
Changes related to the above issue made it possible to
create WASM versions of methods used in the bidi-trie.
In this commit, WASM versions of startsWith(), indexOf()
and lastIndexOf() have been implemented.
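For reference, the JS fallback of such a method is a plain
linear scan over the shared character buffer; the WASM
version implements the same loop (simplified sketch):

    // Illustrative JS equivalent of the WASM startsWith():
    // does the needle at [ni, ni+n) occur at haystack position hi?
    function startsWith(buf, hi, ni, n) {
        for ( let i = 0; i < n; i++ ) {
            if ( buf[hi + i] !== buf[ni + i] ) { return false; }
        }
        return true;
    }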
Related issues:
- https://github.com/uBlockOrigin/uBlock-issues/issues/761
- https://github.com/uBlockOrigin/uBlock-issues/issues/528
The previous bidi-trie code could only hold filters which
are plain patterns, i.e. with no wildcard characters, no
origin option (`domain=`), no right and/or left anchor,
and no `csp=` option.
Examples of filters that could be moved into a bidi-trie
data structure:
&ad_box_
/w/d/capu.php?z=$script,third-party
||liveonlinetv247.com/images/muvixx-150x50-watch-now-in-hd-play-btn.gif
Examples of filters that could NOT be moved to a bidi-trie:
-adap.$domain=~l-adap.org
/tsc.php?*&ses=
||ibsrv.net/*forumsponsor$domain=[...]
@@||imgspice.com/jquery.cookie.js|$script
||view.atdmt.com^*/iview/$third-party
||postimg.cc/image/$csp=[...]
Ideally, the filters above should be movable into a
bidi-trie since they are basically plain patterns, or at
least partially movable when there is only a single
wildcard (i.e. when made of two plain patterns).
Also, there were two distinct bidi-tries into which
plain-pattern filters could be moved: one for patterns
without hostname anchoring and another one for
hostname-anchored patterns. This was required because
hostname-anchored patterns have an extra condition which
is outside the bidi-trie's knowledge.
This commit expands the number of filters which can be
stored in the bidi-trie, and also removes the need for
two distinct bidi-tries.
- Added ability to associate a pattern with an integer
in the bidi-trie [1].
- The bidi-trie match code passes this externally
provided integer when calling an externally
provided method used for testing extra conditions
  that may be present for a plain pattern found to
  match in the bidi-trie.
- Decomposed existing filters into smaller logical units:
- FilterPlainLeftAnchored =>
FilterPatternPlain +
FilterAnchorLeft
- FilterPlainRightAnchored =>
FilterPatternPlain +
FilterAnchorRight
- FilterExactMatch =>
FilterPatternPlain +
FilterAnchorLeft +
FilterAnchorRight
- FilterPlainHnAnchored =>
FilterPatternPlain +
FilterAnchorHn
- FilterWildcard1 =>
FilterPatternPlain + [
FilterPatternLeft or
FilterPatternRight
]
- FilterWildcard1HnAnchored =>
FilterPatternPlain + [
FilterPatternLeft or
FilterPatternRight
] +
FilterAnchorHn
- FilterGenericHnAnchored =>
FilterPatternGeneric +
FilterAnchorHn
- FilterGenericHnAndRightAnchored =>
FilterPatternGeneric +
FilterAnchorRight +
FilterAnchorHn
- FilterOriginMixedSet =>
FilterOriginMissSet +
FilterOriginHitSet
- Instances of FilterOrigin[...] and FilterDataHolder
  can also be added to a composite filter to
  represent `domain=` and `csp=` options.
- Added a new filter class, FilterComposite, for
filters which are a combination of two or more
logical units. A FilterComposite instance is a
match when *all* filters composing it are a
match.
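A sketch of the match-all semantics (simplified; the real
implementation stores unit indices and match arguments
rather than bare object references):

    // Illustrative: a composite filter matches only when all of
    // its component units match.
    class FilterComposite {
        constructor(units) {
            this.units = units; // logical filter units
        }
        match() {
            for ( const unit of this.units ) {
                if ( unit.match() !== true ) { return false; }
            }
            return true;
        }
    }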
Since filters are now encoded into combination of
smaller units, it becomes possible to extract the
FilterPatternPlain component and store it in the
bidi-trie, and use the integer as a handle for the
remaining extra conditions, if any.
Since a single pattern in the bidi-trie may be a
component for different filters, the associated
integer points to a sequence of extra conditions,
and a match occurs as soon as one of the extra
conditions (which may itself be a sequence of
conditions) is fulfilled.
Decomposing filters which are currently single
instance into sequences of smaller logical filters
means increasing the storage and CPU overhead when
evaluating such filters. The CPU overhead is
compensated by the fact that more filters can now be
moved into the bidi-trie, where the first match is
efficiently evaluated. The extra conditions have to
be evaluated if and only if there is a match in the
bidi-trie.
The storage overhead is compensated by the
bidi-trie's intrinsic nature of merging similar
patterns.
Furthermore, the storage overhead is reduced by no
longer using a JavaScript array to store collections
of filters (which is what FilterComposite is):
the same technique used in [2] is imported to store
sequences of filters.
A sequence of filters is a sequence of integer pairs
where the first integer is an index to an actual
filter instance stored in a global array of filters
(`filterUnits`), while the second integer is an index
to the next pair in the sequence -- which means all
sequences of filters are encoded in one single array
of integers (`filterSequences` => Uint32Array). As
a result, a sequence of filters can be represented by
one single integer -- an index to the first pair --
regardless of the number of filters in the sequence.
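A sketch of how such a sequence is walked (illustrative;
assumes index 0 terminates a sequence):

    // Illustrative: visit all filters in the sequence whose first
    // pair is at index i. filterSequences is a Uint32Array of
    // pairs: [ i+0 ] = index into filterUnits,
    //        [ i+1 ] = index of the next pair, or 0 if none.
    function forEachFilterInSequence(i, fn) {
        while ( i !== 0 ) {
            fn(filterUnits[filterSequences[i+0]]);
            i = filterSequences[i+1];
        }
    }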
This representation is further leveraged to replace
the use of a JavaScript array in FilterBucket [3],
which used a JavaScript array to store a collection
of filters. Doing so means there is no more need for
FilterPair [4], whose purpose was to be a lightweight
representation when there were only two filters in a
collection.
As a result of the above changes, the map of `token`
(integer) => filter instance (object) used to
associate tokens to filters or collections of filters
is replaced with a more efficient map of `token`
(integer) => filter unit index (integer), used to look up
a filter object in the global `filterUnits` array.
Another consequence of using one single global
array to store all filter instances is that we can
reuse existing instances when a logical filter is
parameter-less, which is the case for FilterAnchorLeft,
FilterAnchorRight and FilterAnchorHn: the index to these
single instances is reused where needed.
`urlTokenizer` now stores the character codes of the
scanned URL into a bidi-trie buffer, for reuse when
string matching methods are called.
New method: `tokenHistogram()`, used to generate
histograms of occurrences of tokens extracted from URLs
in the built-in benchmark. The top results of the "miss"
histogram are used as "bad tokens", i.e. tokens to
avoid if possible when compiling filter lists.
All plain pattern strings are now stored in the
bidi-trie memory buffer, regardless of whether they
will be used in the trie proper or not.
Three methods have been added to the bidi-trie to test
a stored string against the URL which is also stored in
the bidi-trie.
FilterParser is now instantiated on demand and
released when no longer used.
***
[1] 135a45a878/src/js/strie.js (L120)
[2] e94024d350
[3] 135a45a878/src/js/static-net-filtering.js (L1630)
[4] 135a45a878/src/js/static-net-filtering.js (L1566)
Related documentation:
- https://help.eyeo.com/en/adblockplus/how-to-write-filters#element-hiding
Related feedback/discussion:
- https://www.reddit.com/r/uBlockOrigin/comments/d6vxzj/
The `elemhide` filter option as per ABP semantic is
now supported. Previously uBO would consider `elemhide`
to be an alias of `generichide`.
The support of `elemhide` is implemented through the
convenient conversion of the `elemhide` option into the
existing `generichide` option and the new `specifichide`
option.
The purpose of the new `specifichide` filter option
is to disable all specific cosmetic filters, i.e.
those which target a specific site.
Additionally, for convenience, the filter options
`generichide`, `specifichide` and `elemhide` can be
aliased using the shorter forms `ghide`, `shide` and
`ehide` respectively.
Related feedback:
- https://www.reddit.com/r/uBlockOrigin/comments/cmh910/
Additionally, the `3p` rule has been made distinct from
`3p-script`/`3p-frame` for the purpose of the
"Relax blocking mode" command.
The badge color will hint at the current blocking mode.
There are four colors for the four following blocking
modes:
- JavaScript wholly disabled
- All 3rd parties blocked
- 3rd-party scripts and frames blocked
- None of the above
The default badge color will be used when JavaScript is not
wholly disabled and when there are no rules for `3p`,
`3p-script` or `3p-frame`.
A new advanced setting has been added to let the user choose
the badge colors for the various blocking modes,
`blockingProfileColors`. The value *must* be a sequence of
4 valid CSS color values, each made of 6 hexadecimal digits
prefixed with `#` -- anything else will be ignored.
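For instance, a value of the following form would be
accepted (hypothetical colors):

    #0000cc #cc0000 #cc6600 #888888

A sketch of the expected format as a regex (illustrative):

    // Illustrative: four 6-hex-digit CSS colors, whitespace-separated.
    const reBadgeColors = /^#[\da-f]{6}(\s+#[\da-f]{6}){3}$/i;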
The staticNetFilteringEngine uses token hashes to store/look up
filters in Map objects.
Before this commit, the tokens were encoded into token hashes
as JS numbers (not exceeding MAX_SAFE_INTEGER) using at most
the first 8 characters of the token.
With this commit, token hashes are now restricted to fit
into 32-bit integers, and are derived from at most the
first 7 characters of the token. This improves filter
look-up performance as per the built-in benchmark().
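A sketch of deriving such a hash (illustrative; not uBO's
exact hash function):

    // Illustrative: fold at most the first 7 character codes of a
    // token into an unsigned 32-bit integer.
    function tokenHashFromString(s) {
        const n = Math.min(s.length, 7);
        let hash = 0;
        for ( let i = 0; i < n; i++ ) {
            hash = (hash << 4 ^ s.charCodeAt(i)) >>> 0;
        }
        return hash;
    }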
Related commit:
- 69a43e07c4
Using 32 bits of token hash rather than just the 16 lower
bits does help discard more unknown tokens.
Using the default filter lists, the known-token lookup
table is populated by 12,276 entries, out of 65,536, thus
making the case that theoretically there are a lot of
possible tokens which can be discarded.
In practice, running the built-in
staticNetFilteringEngine.benchmark() with default filter
lists, I find that 1,518,929 tokens were skipped out of
4,441,891 extracted tokens, or 34%.
Given that all tokens extracted from one single URL are potentially
iterated multiple times in a single URL-matching cycle, it pays to
ignore extracted tokens which are known to not be used anywhere in
the static filtering engine.
The gain in processing a single network request in the
static filtering engine can become especially high when
dealing with long and random-looking URLs, as such URLs
have a high likelihood of containing a majority of tokens
which are known to not be in use.
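A sketch of such a known-token table (illustrative): all 32
bits of the hash are folded into a 16-bit index, so tokens
which differ only in their upper bits land in different
entries:

    // Illustrative: 65,536-entry known-token lookup table.
    const knownTokens = new Uint8Array(65536);

    function addKnownToken(th) {
        knownTokens[th & 0xFFFF ^ th >>> 16] = 1;
    }

    // False positives are possible (harmless); false negatives
    // are not.
    function isKnownToken(th) {
        return knownTokens[th & 0xFFFF ^ th >>> 16] !== 0;
    }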
The purpose of using a custom base128 encoder is to
convert array buffers into strings, to allow a direct
string-to-array buffer conversion at load time:
string => array buffer
Whereas a JSON array would require an extra step:
JSON array as string => JS array => array buffer
It turns out that the current use of a custom base128
encoding results in significantly larger selfie storage
usage when converting array buffers into strings.
Speculation: possibly the browser internally converts the
strings to be saved into JSON strings. Since the custom
base128 encoder is likely to cause the resulting string to
contain a lot of unprintable ASCII characters, these need
to be escaped when converted to JSON -- and escaped
characters occupy more space than non-escaped ones.
Using a sequence of base-64 numbers means only printable
characters will be present in the output string, hence no
escaping is necessary. I have observed a significant
reduction in storage usage for selfie purposes.
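A sketch of the idea (illustrative; the actual digit
charset may differ): each 32-bit value is emitted as a run
of printable base-64 digits, so no character in the output
needs JSON escaping:

    // Illustrative: encode one unsigned 32-bit integer as
    // printable base-64 digits, least-significant digit first.
    const digits =
        'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789@%';
    function encodeUint32(v) {
        let s = '';
        do {
            s += digits.charAt(v & 0b111111);
            v >>>= 6;
        } while ( v !== 0 );
        return s;
    }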
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/528#issuecomment-484408622
Following STrie-related work in above issue, I noticed that a large
number of filters in EasyList were filters which only had to match
against the document origin. For instance, among just the top 10
most populous buckets, there were four such buckets with
over a hundred entries each:
- bits: 72, token: "http", 146 entries
- bits: 72, token: "https", 139 entries
- bits: 88, token: "http", 122 entries
- bits: 88, token: "https", 118 entries
The filters in these buckets have to be matched against
all network requests.
In order to leverage HNTrie for these filters[1], they are now handled
in a special way so as to ensure they all end up in a single HNTrie
(per bucket), which means that instead of scanning hundreds of entries
per URL, there is now a single scan per bucket per URL for these
apply-everywhere filters.
Now, any filter which fulfills ALL the following conditions
will be processed in a special manner internally:
- Is of the form `|https://` or `|http://` or `*`; and
- Does have a `domain=` option; and
- Does not have a negated domain in its `domain=` option; and
- Does not have `csp=` option; and
- Does not have a `redirect=` option
If a filter does not fulfill ALL the conditions above,
there is no change in behavior.
A filter which matches ALL of the above will be processed in a special
manner:
- The `domain=` option will be decomposed so as to create as many
  distinct filters as there are distinct values in the `domain=`
  option.
- This also applies to the `badfilter` version of the filter, which
  means it now becomes possible to `badfilter` only one of the
  distinct filters without having to `badfilter` all of them.
- The logger will always report these special filters with only a
single hostname in the `domain=` option.
***
[1] HNTrie is currently WASM-ed on Firefox.
The motivation is to address the higher peak memory usage at launch
time with 3rd-gen HNTrie when a selfie was present.
The selfie generation prior to this change was to collect all
filtering data into a single data structure, and then to serialize
that whole structure at once into storage (using JSON.stringify).
However, HNTrie serialization requires that a large Uint32Array be
converted into a plain JS array, which itself would be indirectly
converted into a JSON string. This was the main reason why peak
memory usage would be higher at launch from selfie, since the JSON
string would need to be wholly unserialized into JS objects, which
themselves would need to be converted into more specialized data
structures (like that Uint32Array one).
The solution to lower peak memory usage at launch is to refactor
selfie generation to allow a more piecemeal approach: each filtering
component is given the ability to serialize itself rather than being
forced to be embedded in the master selfie. With this approach, the
HNTrie buffer can now be serialized to its own storage by converting
the buffer data directly into a string which can be sent directly
to storage. This avoids expensive intermediate steps such as
converting into a JS array and then into a JSON string.
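A sketch of the direct buffer-to-string conversion
(illustrative, not the exact uBO code):

    // Illustrative: serialize a Uint32Array losslessly to a string
    // by viewing its underlying buffer as UTF-16 code units,
    // skipping the JS-array and JSON intermediate steps.
    function bufferToString(buf32) {
        const buf16 = new Uint16Array(
            buf32.buffer,
            buf32.byteOffset,
            buf32.length * 2
        );
        const chunks = [];
        for ( let i = 0; i < buf16.length; i += 4096 ) {
            chunks.push(
                String.fromCharCode(...buf16.subarray(i, i + 4096))
            );
        }
        return chunks.join('');
    }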
As part of the refactoring, there was also an opportunistic
code upgrade to ES6 and Promises (eventually all of uBO's
code will be proper ES6).
Additionally, the polyfill to bring getBytesInUse() to
Firefox has been revisited, replacing the rather expensive
previous implementation with one with virtually no overhead.
A new filtering class has been created: "static extended filtering".
This new class is an umbrella class for more specialized filtering
engines:
- Cosmetic filtering
- Scriptlet filtering
- HTML filtering
HTML filtering is available only on platforms which support modifying
the response body on the fly, so only Firefox 57+ at the moment.
With the ability to modify the response body, HTML filtering has
been introduced: removing elements from the DOM before the source
data has been parsed by the browser.
A consequence of the HTML filtering ability is that the
script tag filtering feature is brought back.
- badfilter option was no longer working following the last
  refactoring changes.
- performance work:
  - reduce duplication of large strings.
  - new, lighter FilterBucket to use when there are only 2
    filters: FilterPair.