The same filter infrastructure that can be applied to image URLs now
also works for manga chapters and other delegated URLs.
TODO: actually provide any metadata (currently only deviantart and
imagefap are supported).
This allows for image filtering via Python expressions evaluated
against the same metadata that is also used to build filenames
(--list-keywords).
The usually shunned eval() function is used to evaluate filter
expressions, but it seemed quite appropriate in this case and
shouldn't introduce any new security issues, as any attacker who could run
> gallery-dl --filter "delete-everything()" ...
could just as well run
> python -c "delete-everything()"
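As a rough illustration of the mechanism (not gallery-dl's actual
code), a filter expression can be evaluated with the metadata dict as
its namespace; the metadata keys used here (rating, width) are made-up
examples:

    def apply_filter(expression, metadata):
        # Evaluate the expression with the metadata dict as its local
        # namespace, so keys like 'rating' behave like plain variables.
        return eval(expression, {"__builtins__": {}}, metadata)

    images = [
        {"rating": "safe", "width": 1920},
        {"rating": "explicit", "width": 800},
    ]

    # equivalent in spirit to:
    # > gallery-dl --filter "rating == 'safe' and width >= 1200" ...
    selected = [img for img in images
                if apply_filter("rating == 'safe' and width >= 1200", img)]
    print(selected)   # [{'rating': 'safe', 'width': 1920}]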
Duplicate URLs might occur if, for example, an artist adds another
image to their gallery while an extractor is running and images are
being downloaded on sites like pixiv/nijie/hentaifoundry.
The new upload shifts pagination, so the first image on the next page
will already have been downloaded and will cause a premature end if
'--abort-on-skip' is being used.
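A rough sketch of this failure mode, with a made-up download loop; the
seen set and the abort_on_skip flag stand in for the real skip logic:

    def download_all(pages, abort_on_skip=False):
        seen = set()
        for page in pages:
            for url in page:
                if url in seen:
                    # Already downloaded: skip quietly, or stop the
                    # whole run when '--abort-on-skip' is in effect.
                    if abort_on_skip:
                        raise RuntimeError("skip -> abort: " + url)
                    continue
                seen.add(url)
                print("downloading", url)

    # A new upload mid-run shifts pagination by one, so the last image
    # of page 1 reappears as the first image of page 2:
    page1 = ["img5", "img4", "img3"]
    page2 = ["img3", "img2", "img1"]
    download_all([page1, page2])   # img3 is skipped once, run continues
    # with abort_on_skip=True the run would stop at img3 and never
    # reach img2 and img1 -- a premature end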
- ask the user to report unexpected errors, which usually indicate
  an extractor failure
- handle OSErrors separately (permissions, disk full, etc.); see the
  sketch after this list
- revert 30eef52
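A minimal sketch of what both error-handling points could look like;
save_file() and the messages are hypothetical:

    import sys

    def save_file(path, data):
        try:
            with open(path, "wb") as fp:
                fp.write(data)
        except OSError as exc:
            # Environment problems (permissions, disk full, ...) are
            # reported plainly; they are the user's to fix.
            print("unable to write '%s': %s" % (path, exc), file=sys.stderr)
        except Exception:
            # Anything else usually indicates an extractor failure,
            # so ask the user to report it.
            print("unexpected error; please report this issue",
                  file=sys.stderr)
            raise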
There was never any "good" reason for the strict separation
between extractors and downloaders. This change allows for
reduced resource usage (probably unnoticeable) and fewer lines
of code at the "cost" of tighter coupling.
reddit extractors now recursively visit other submissions/posts
linked to in the initial set of submissions.
This behaviour can be configured via the 'extractor.reddit.recursion'
key in the configuration file or by `-o recursion=<value>`.
Example:
{"extractor": {
"reddit": {
"recursion": <value>
}}}
Possible values:
* -1 - infinite recursion (don't do this)
* 0 - recursion is disabled (default)
* 1 and higher - maximum recursion level
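As a rough sketch of how such a level could bound the traversal
(extract_links() and the post names are made up; this is not the
extractor's actual code):

    LINKS = {"post-a": ["post-b"], "post-b": ["post-c"], "post-c": []}

    def extract_links(submission):
        # stand-in for scanning a submission for links to other posts
        return LINKS.get(submission, [])

    def visit(submissions, recursion):
        # recursion < 0: unbounded; 0: never follow links
        queue = [(s, 0) for s in submissions]
        seen = set()
        while queue:
            submission, depth = queue.pop(0)
            if submission in seen:
                continue
            seen.add(submission)
            print("processing", submission, "at depth", depth)
            if recursion < 0 or depth < recursion:
                for linked in extract_links(submission):
                    queue.append((linked, depth + 1))

    visit(["post-a"], recursion=1)   # visits post-a and post-b, not post-c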