mirror of https://github.com/mikf/gallery-dl.git
synced 2024-11-22 02:32:33 +01:00

revert: options.md
commit 6f3c3a5d1e

docs/options.md
@@ -6,53 +6,72 @@
## General Options:

    -h, --help                Print this help message and exit
    --version                 Print program version and exit
    -f, --filename FORMAT     Filename format string for downloaded files ('/O' for "original" filenames)
    -d, --destination PATH    Target location for file downloads
    -D, --directory PATH      Exact location for file downloads
    -X, --extractors PATH     Load external extractors from PATH
    --proxy URL               Use the specified proxy
    --source-address IP       Client-side IP address to bind to
    --user-agent UA           User-Agent request header
    --clear-cache MODULE      Delete cached login sessions, cookies, etc. for MODULE (ALL to delete everything)
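
As a quick illustration, -d and -f are often combined to control where files land and what they are called. The URL below is a placeholder, and the {id}/{extension} keywords are assumptions; the keywords that actually exist for a given site are listed by '-K':

```
# save into ~/Downloads, naming each file by its ID and original extension
gallery-dl -d ~/Downloads -f "{id}.{extension}" "https://example.org/gallery/123"
```
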
## Input Options:

    -i, --input-file FILE     Download URLs found in FILE ('-' for stdin). More than one --input-file can be specified
    -I, --input-file-comment FILE
                              Download URLs found in FILE. Comment them out after they were downloaded successfully.
    -x, --input-file-delete FILE
                              Download URLs found in FILE. Delete them after they were downloaded successfully.
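
Since more than one --input-file may be given, file input and stdin can be mixed. A minimal sketch with placeholder URLs:

```
# urls.txt holds one URL per line; one more URL arrives on stdin ('-')
echo "https://example.org/gallery/456" | gallery-dl -i urls.txt -i -
```
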
## Output Options:

    -q, --quiet               Activate quiet mode
    -w, --warning             Print only warnings and errors
    -v, --verbose             Print various debugging information
    -g, --get-urls            Print URLs instead of downloading
    -G, --resolve-urls        Print URLs instead of downloading; resolve intermediary URLs
    -j, --dump-json           Print JSON information
    -s, --simulate            Simulate data extraction; do not download anything
    -E, --extractor-info      Print extractor defaults and settings
    -K, --list-keywords       Print a list of available keywords and example values for the given URLs
    -e, --error-file FILE     Add input URLs which returned an error to FILE
    --list-modules            Print a list of available extractor modules
    --list-extractors         Print a list of extractor classes with description, (sub)category and example URL
    --write-log FILE          Write logging output to FILE
    --write-unsupported FILE  Write URLs, which get emitted by other extractors but cannot be handled, to FILE
    --write-pages             Write downloaded intermediary pages to files in the current directory to debug problems
    --no-colors               Do not emit ANSI color codes in output
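
These switches make dry runs cheap. For example (placeholder URL):

```
# inspect the metadata keywords a URL provides, then print its file URLs
# without downloading anything
gallery-dl -K "https://example.org/gallery/123"
gallery-dl -g "https://example.org/gallery/123"
```
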
## Downloader Options:

    -r, --limit-rate RATE     Maximum download rate (e.g. 500k or 2.5M)
    -R, --retries N           Maximum number of retries for failed HTTP requests or -1 for infinite retries (default: 4)
    --http-timeout SECONDS    Timeout for HTTP connections (default: 30.0)
    --sleep SECONDS           Number of seconds to wait before each download. This can be either a constant value or a range (e.g. 2.7 or 2.0-3.5)
    --sleep-request SECONDS   Number of seconds to wait between HTTP requests during data extraction
    --sleep-extractor SECONDS Number of seconds to wait before starting data extraction for an input URL
    --filesize-min SIZE       Do not download files smaller than SIZE (e.g. 500k or 2.5M)
    --filesize-max SIZE       Do not download files larger than SIZE (e.g. 500k or 2.5M)
    --chunk-size SIZE         Size of in-memory data chunks (default: 32k)
    --no-part                 Do not use .part files
    --no-skip                 Do not skip downloads; overwrite existing files
    --no-mtime                Do not set file modification times according to Last-Modified HTTP response headers
    --no-download             Do not download any files
    --no-postprocessors       Do not run any post processors
    --no-check-certificate    Disable HTTPS certificate validation
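
A sketch of a deliberately polite run, with arbitrary example values and a placeholder URL:

```
# cap bandwidth at 500 KB/s, wait a random 2.0-3.5s before each download,
# and retry failed requests up to 8 times
gallery-dl -r 500k --sleep 2.0-3.5 -R 8 "https://example.org/gallery/123"
```
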
@@ -74,18 +93,32 @@

    -C, --cookies FILE        File to load additional cookies from
    --cookies-export FILE     Export session cookies to FILE
    --cookies-from-browser BROWSER[/DOMAIN][+KEYRING][:PROFILE][::CONTAINER]
                              Name of the browser to load cookies from, with optional domain prefixed with '/', keyring name prefixed with '+', profile prefixed with ':', and container prefixed with '::' ('none' for no container)
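
For sites that require a login, an existing browser session can be reused. The profile name below is an assumption; which browsers and keyrings are available depends on your system:

```
# load cookies from Firefox's "default" profile for this run
gallery-dl --cookies-from-browser firefox:default "https://example.org/gallery/123"
```
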
## Selection Options:

    --download-archive FILE   Record all downloaded or skipped files in FILE and skip downloading any file already in it
    -A, --abort N             Stop current extractor run after N consecutive file downloads were skipped
    -T, --terminate N         Stop current and parent extractor runs after N consecutive file downloads were skipped
    --range RANGE             Index range(s) specifying which files to download. These can be either a constant value, range, or slice (e.g. '5', '8-20', or '1:24:3')
    --chapter-range RANGE     Like '--range', but applies to manga chapters and other delegated URLs
    --filter EXPR             Python expression controlling which files to download. Files for which the expression evaluates to False are ignored. Available keys are the filename-specific ones listed by '-K'. Example: --filter "image_width >= 1000 and rating in ('s', 'q')"
    --chapter-filter EXPR     Like '--filter', but applies to manga chapters and other delegated URLs
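
Archive, range, and filter options compose. A sketch with an assumed archive filename, the image_width key from the --filter example above, and a placeholder URL:

```
# fetch only the first 20 files that are at least 1000px wide,
# skipping anything already recorded in archive.db
gallery-dl --download-archive archive.db --range "1-20" \
    --filter "image_width >= 1000" "https://example.org/gallery/123"
```
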
## Post-processing Options:

    -P, --postprocessor NAME  Activate the specified post processor
@@ -96,7 +129,16 @@
    --write-tags              Write image tags to separate text files
    --zip                     Store downloaded files in a ZIP archive
    --cbz                     Store downloaded files in a CBZ archive
    --mtime NAME              Set file modification times according to metadata selected by NAME. Examples: 'date' or 'status[date]'
    --ugoira FORMAT           Convert Pixiv Ugoira to FORMAT using FFmpeg. Supported formats are 'webm', 'mp4', 'gif', 'vp8', 'vp9', 'vp9-lossless', 'copy'.
    --exec CMD                Execute CMD for each downloaded file. Supported replacement fields are {} or {_path}, {_directory}, {_filename}. Example: --exec "convert {} {}.png && rm {}"
    --exec-after CMD          Execute CMD after all files were downloaded. Example: --exec-after "cd {_directory} && convert * ../doc.pdf"
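
For instance, a manga chapter can be packed into a comic archive with sensible timestamps. The URL is a placeholder; 'date' is the metadata field named in the --mtime example above:

```
# store the chapter as a .cbz and set mtimes from each file's 'date' metadata
gallery-dl --cbz --mtime date "https://example.org/manga/title/chapter-1"
```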