When no `downloader` is passed to `FFmpegPostProcessor`,
an exception was raised when trying to read the `prefer_ffmpeg` param:
AttributeError: 'NoneType' object has no attribute 'params'
This fixes the lookup by defaulting to `False` when no downloader is set.
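A minimal sketch of the guard this implies (the simplified class and the `_get_param` helper are assumptions for illustration, not the actual patch):

```python
class FFmpegPostProcessor(object):
    """Sketch only: illustrates the None-downloader guard, not the real class."""

    def __init__(self, downloader=None):
        self._downloader = downloader
        # Previously this effectively did
        #   self._downloader.params.get('prefer_ffmpeg')
        # and raised AttributeError when downloader was None.
        self._prefer_ffmpeg = self._get_param('prefer_ffmpeg', False)

    def _get_param(self, name, default=None):
        # No downloader means no user params; fall back to the default.
        if self._downloader is None:
            return default
        return self._downloader.params.get(name, default)
```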
As the "LG Time Machine" (a (not so) smart TV) has a limitation for video dimensions (as for codecs), I take to implement an extra parameter `--pp-params` where we can send extra parameterization for the video converter (post-processor).
Example:
```
$ youtube-dl --recode-video=xvid --pp-params='-s 720x480' -c https://www.youtube.com/watch?v=BE7Qoe2ZiXE
```
That works fine on a four-year-old LG Time Machine.
Closes #5733
Without the '--keep-video' option the two files would be downloaded again, and even with the option, ffmpeg would be run again, which for some videos can take a long time.
We use a temporary file with ffmpeg so that the final file only exists if it succeeds.
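A rough sketch of the temp-file-then-rename pattern described above (the function name and ffmpeg arguments are illustrative, not the actual post-processor code):

```python
import os
import subprocess

def merge_to_temp_then_rename(video_path, audio_path, out_path):
    # Write to a temporary name first; only rename to the final name once
    # ffmpeg has exited successfully, so a failed or interrupted run never
    # leaves a half-written file at out_path.
    temp_path = out_path + '.temp.mkv'
    cmd = [
        'ffmpeg', '-y',
        '-i', video_path,
        '-i', audio_path,
        '-c', 'copy',
        temp_path,
    ]
    retcode = subprocess.call(cmd)
    if retcode != 0:
        # Leave the downloaded inputs untouched so the merge can be
        # retried without re-downloading anything.
        raise RuntimeError('ffmpeg exited with code %d' % retcode)
    os.rename(temp_path, out_path)
```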
* Remove the 'songtitle' field; 'title' can be used instead.
* Remove newlines in the help text, for consistency with other options.
* Add 'from __future__ import unicode_literals'.
* Call the parent class's '__init__'.
* Add test for the format_to_regex method
We need to keep the original subtitles information, so that the '--load-info' option can be used to list or select the subtitles again.
We'll also be able to have a separate field for storing the automatic captions info.
For each language the extractor builds a list with the available formats sorted (like for video formats); YoutubeDL then selects one of them using the '--sub-format' option, which now allows giving format preferences (for example 'ass/srt/best').
For each format the 'url' field can be set so that we only download the contents if needed, or, if the contents need to be processed (like in crunchyroll), the 'data' field can be used.
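A minimal sketch of how the preference-based selection could work (the helper and the subtitle dict layout are simplified assumptions, not the actual YoutubeDL code):

```python
def select_subtitle_format(formats, sub_format_spec):
    """Pick one subtitle format from a list sorted worst-to-best.

    formats: e.g. [{'ext': 'vtt', 'url': ...}, {'ext': 'srt', 'url': ...}]
    sub_format_spec: preference string such as 'ass/srt/best'
    """
    for preference in sub_format_spec.split('/'):
        if preference == 'best':
            # The list is assumed sorted like video formats, best entry last.
            return formats[-1]
        matches = [f for f in formats if f['ext'] == preference]
        if matches:
            return matches[-1]
    return None


# Example: prefer 'ass', fall back to 'srt', then to the best available.
formats = [{'ext': 'vtt', 'url': 'http://example.com/en.vtt'},
           {'ext': 'srt', 'url': 'http://example.com/en.srt'}]
print(select_subtitle_format(formats, 'ass/srt/best'))  # -> the srt entry
```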
The reasons for this change are:
* We weren't checking that the format given with '--sub-format' was available; checking it in each extractor would be repetitive.
* It makes it easy to support giving a format preference.
* The subtitles were automatically downloaded in the extractor, but I think that if you use, for example, the '--dump-json' option, you want to finish as fast as possible.
Currently only the ted extractor has been updated, but the old system still works.
If you run 'while read aurl ; do youtube-dl --extract-audio "${aurl}"; done < path_to_batch_file' (where the batch file contains one URL per line), each call to youtube-dl consumed some characters from stdin, and 'read' would assign to 'aurl' an invalid URL, something like 'tube.com/watch?v=<id>'.
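One way to avoid this kind of stdin consumption is to make sure the spawned converter process does not inherit the shell's stdin; a minimal sketch in Python (the exact change may differ):

```python
import subprocess

def run_ffmpeg(args):
    # ffmpeg reads from stdin by default, which in a shell 'while read'
    # loop steals characters meant for the next 'read'.  Giving the child
    # its own (empty) stdin prevents that.
    p = subprocess.Popen(
        ['ffmpeg'] + args,
        stdin=subprocess.PIPE,   # detach from the parent's stdin
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, stderr = p.communicate()
    if p.returncode != 0:
        raise RuntimeError(stderr.decode('utf-8', 'replace'))
    return stdout
```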