- Removed the `devine auth` command and sub-commands due to lack of support, risk to your data, and its general quirks.
- Removed `profiles` config data; you must now specify which profile you wish to use each time with -p/--profile. If you use a specific profile a lot more than others, you should make it the default. See below.
- Added a `default` key to each service mapping in `credentials` that will be used if -p/--profile is not specified.
- Each service mapping in `credentials` is no longer forced to use profiles. You can now simply specify `Service: username:password` if you only use one credential (see the config sketch below this list).
- Auth-less Services now simply specify no credential and have no cookie file.
- There is no longer an error for not having a cookie and/or credential for the chosen profile, as a profile no longer has to be chosen.
- Cookies are now checked for in 3 different locations in the following order:
1. `/Cookies/{Service Name}.txt`
2. `/Cookies/{Service Name}/{profile}.txt`
3. `/Cookies/{Service Name}/default.txt`
This means you now have more options for organizing the layout of Cookie files, similar to the new Credentials config.
Note: `/Cookies/{Service Name}/.txt` also works as an alternative to `default.txt`. The benefit is that `.txt` will always sort to the top of the folder.
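For illustration, a hypothetical sketch of what the new `credentials` layout allows (service names and values are made up):

```yaml
credentials:
  ServiceA:
    default: jane@example.com:hunter2  # used when -p/--profile is not specified
    james: james@example.com:hunter2   # used with -p james
  ServiceB: john@example.com:hunter2   # single credential, no profiles at all
```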
This is to reduce the number of required dependencies by not strictly requiring aria2c out of the box. You can always change the downloader back to aria2c in the config.
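E.g., assuming the config key is `downloader`:

```yaml
downloader: aria2c
```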
This fixes some Subtitles having, e.g., `&amp;` instead of just `&`, but especially special entities like `&rlm;`, which enables Right-to-Left mode on Hebrew and Arabic Subtitles.
Previously, it would show the download as fully complete after the first 1024-byte chunk was downloaded, because the Progress Bar's total value was set to the number of URLs. The code assumed there would be multiple URLs to download at once and advanced the progress bar each time one of those downloads completed.
This changes it so that if there's only one URL to download, the total number of chunks to download is calculated instead, which corrects how the progress bar advances.
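A minimal sketch of the corrected calculation, assuming a requests-style streamed download and a rich-style `progress` update callback (all names illustrative):

```python
import math

import requests

CHUNK_SIZE = 1024  # bytes per streamed chunk

def download(url: str, progress) -> None:
    response = requests.get(url, stream=True)
    response.raise_for_status()
    # for a single URL, total the progress bar by chunk count, not URL count
    content_length = int(response.headers.get("Content-Length", 0))
    if content_length:
        progress(total=math.ceil(content_length / CHUNK_SIZE))
    with open("output.bin", "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            f.write(chunk)
            progress(advance=1)  # advance per chunk, not per completed URL
```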
It normally auto-detects the format from the file extension. The supported formats are "MP4" and "WEBM". The input files to shaka-packager are currently always ".mp4", so this isn't particularly an issue.
However, I want to add this as a precaution in case that changes. It isn't a problem if the input file is another format like WEBM, as this only controls the output format (the format devine wants), not the input format.
To possibly support download resuming in the future, the file names used by the decrypt, repack, and change-range functions were simplified. Once output has finished, the original input file is deleted and its path re-used for the output.
The file names now just append `_repack`, `_decrypted`, `_full_range`, etc. to the filename rather than using a duplex extension (`.repack.mp4`, `.decrypted.mp4`, `.range0.mp4`).
This is all so that the code checking whether a file was already downloaded can be simpler. Instead of checking whether 4 different possible file names for a completed download exist, it checks one.
This issue is common with Now TV, where for some reason the subtitle parses into "two" languages, "en" and "eng". This results in one empty caption list and one non-empty caption list, with the empty one typically first.
This causes a multitude of snowballing problems further down the codebase, e.g., converting to SRT then results in a "MULTI-LANGUAGE SRT" header, which most programs (like mkvmerge) do not recognize, causing a mux failure.
The browser to imitate can be set in the config:
For example,
```yaml
curl_impersonate:
  browser: chrome110
```
It will default to using chrome110 if no value is set in the config.
A list of available Browsers is available here: https://github.com/yifeikong/curl_cffi#sessions
This was originally done to prevent *all* aria2c logs unless on the last attempt, at which point, if all attempts had failed, aria2c would be allowed to log the error.
However, that's bad practice, as aria2c may produce errors or warnings on, say, the 3rd attempt, and the 3rd attempt may have otherwise succeeded with those warnings or errors. It also generally shouldn't be necessary.
The original values would cause blocks by some Services. Therefore, it is better to default to safer values. The new values match the defaults used by aria2c as listed in their docs.
For downloads by devine, there's generally no reason to retrieve this information when it will be decrypted, repacked, remuxed, and so on anyway. Requesting the timestamp will just mean more requests being made, perhaps slowing down the download.
This was added by another team member a long time ago, seemingly for the purposes of preventing a split on DASH/HLS segment files, as they would be already quite small.
However, just because they are small doesn't mean splitting them is a problem, and a segment would only split if the default split size of 20M fits into it at least twice. I.e., a 45M segment splits into two downloads, while a 25M segment won't split at all. You may think 25M would split by 20M into two downloads, but the split size must fit completely for a split to occur: 2 downloads needs at least 40M, 3 needs 60M, 4 needs 80M, and so on.
A 40M or bigger segment file does in my opinion deserve to be split as it may genuinely reap speed benefits.
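To make the arithmetic concrete, a sketch assuming the floor-division semantics described above:

```python
def split_count(file_size_mb: int, min_split_size_mb: int = 20) -> int:
    # a download only splits into as many pieces as the split size fully fits
    return max(1, file_size_mb // min_split_size_mb)

assert split_count(25) == 1  # 20M fits only once: no split
assert split_count(45) == 2  # 20M fits twice: two downloads
assert split_count(60) == 3  # and so on in 20M steps
```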
If the Server returns a Content-Length header with a value of 0, the code shortly after it would end up looping over streamed response chunks of 0-length size, which would go on forever.
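A minimal sketch of the guard, assuming a requests-style streamed response (names illustrative):

```python
def write_stream(response, file) -> int:
    content_length = int(response.headers.get("Content-Length", -1))
    if content_length == 0:
        # nothing to download; looping over 0-length chunks would never end
        return 0
    written = 0
    for chunk in response.iter_content(1024):
        written += file.write(chunk)
    return written
```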
This is seen in some manifests/services for whatever reason. I can't find documentation for this value anywhere. It seems unused in official specifications as of right now. However, it seems in some services/places it is unofficially used as a PAL-version of BT-601 transfer, which makes sense.
Devine's code (and other services) wouldn't care about the difference here, so currently it is just implemented as a remap from 5 to 6. In the future it may be changed and actually defined as two separate BT_601 Transfer enum entries.
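The remap itself is a one-liner; a sketch (variable name illustrative):

```python
def remap_transfer(value: int) -> int:
    # value 5 is unofficially used as a PAL variant of the BT.601 transfer (6),
    # so remap it rather than defining two separate BT_601 enum entries for now
    return 6 if value == 5 else value
```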
This also bypasses the warning log about the audio likely being part of an invariant playlist; while that may be true, it is too specific a warning when there could be multiple other reasons for it.
Ok, so there's a few reasons this was done.
1) Design-wise it isn't valid to have --proxy (or via config/otherwise) set a proxy, then unpredictably have it bypassed or disabled. If I specify `--proxy 127.0.0.1:8080`, I would expect it to use that proxy for all communication indefinitely, not switch in and out depending on the track or service.
2) Following from reason 1, it's also a security problem. The only reason I implemented it in the first place was so I could download faster on my home connection. This means I would authenticate and call APIs under a proxy, then suddenly download manifests and segments etc. under my home connection. A competent service could see that as an indicator of foul play and flag you.
3) Maintaining this setup across the codebase is extremely annoying, especially because of how proxies are set up and used by requests Sessions. There's no way to tell a requests Session to temporarily disable the proxy and turn it back on later; you have to get the proxy from the session (in an annoying way), store it, remove it, make the calls, and then, assuming you're still in the same function, add it back. If you're not in the same function, well, time for some spaghetti code. See the sketch below.
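For illustration (a sketch; URLs are placeholders):

```python
import requests

session = requests.Session()
session.proxies = {"all": "http://127.0.0.1:8080"}

# requests has no "pause proxy" switch; you must stash, clear, and restore it
saved_proxies = session.proxies
session.proxies = {}
response = session.get("https://example.com/manifest.m3u8")  # direct connection
session.proxies = saved_proxies
```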
---
tl;dr: -1 UX/design/expectations with the CLI, -1 security, -1 code maintenance, but only +1 for potentially increased download speeds in certain scenarios.
Services needing this done should apply it themselves, e.g., via OnMultiplex. A convenience function is now available as `Subtitle.remove_multi_lang_srt_header()`, so you can do, e.g., `subtitle.OnMultiplex = remove_multi_lang_srt_header` and it will be passed through this function just before muxing.
On Windows it seems to default to an encoding other than UTF-8 (possibly UTF-16 or CP-1252), and since the chapter file is saved as UTF-8, characters outside the typical ASCII range, like ø and æ, get mangled.
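A minimal sketch of the kind of fix this implies, assuming the chapter file is read and written with Python's built-in file I/O, which defaults to the locale encoding on Windows (file name illustrative):

```python
from pathlib import Path

# always pass encoding explicitly so Windows doesn't fall back to, e.g., cp1252
chapters = Path("chapters.txt").read_text(encoding="utf-8")
Path("chapters.txt").write_text(chapters, encoding="utf-8")
```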
The default is still SubRip SRT, but you can now change the output format to almost any of the available Codec options. There is no option to leave the subtitle format as-is yet, i.e., if there's both an SRT and a WebVTT subtitle, keep each as-is.
Like always, you can configure a default in your config file, e.g.,
```yaml
dl:
  sub_format: vtt
```
Note though that SSA, SSAv4, fTTML, and fVTT are not yet supported. There are no plans to support fTTML or fVTT.
Chardet was detecting a mixture of mostly CP-1252 and MacRoman encodings; the box should just be left as-is when parsing. The actual text within it may perhaps want to go through `try_ensure_utf8` when parsed, but not the entire box.
* Add option for automatic subtitle character encoding normalization
The rationale behind this function is that some services use ISO-8859-1
(latin1) or Windows-1252 (CP-1252) instead of UTF-8 encoding, whether
intentionally or accidentally. Some services even stream subtitles with
malformed/mixed encoding (each segment has a different encoding).
* Remove Subtitle parameter `auto_fix_encoding`
Just always attempt to fix encoding. If the subtitle is neither UTF-8 nor CP-1252, then it should realistically error out instead of producing garbage Subtitle data anyway.
* Move Subtitle encoding fixing code out of if drm tree
* Use chardet as a last ditch effort fixing Subs, or return original data
* Move Subtitle.fix_encoding method to utilities as try_ensure_utf8 (sketched below)
* Add Shivelight as a contributor
---------
Co-authored-by: rlaphoenix <rlaphoenix@pm.me>
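A hedged sketch of what a `try_ensure_utf8` utility like the one described might look like (not necessarily devine's exact implementation):

```python
import chardet

def try_ensure_utf8(data: bytes) -> bytes:
    """Try to convert data to UTF-8, returning it unchanged if nothing works."""
    try:
        data.decode("utf-8")
        return data  # already valid UTF-8
    except UnicodeDecodeError:
        try:
            # CP-1252 is the most common non-UTF-8 subtitle encoding in the wild
            return data.decode("cp1252").encode("utf-8")
        except UnicodeDecodeError:
            # last ditch effort: let chardet guess, or return the original data
            encoding = chardet.detect(data).get("encoding")
            if encoding:
                return data.decode(encoding, errors="replace").encode("utf-8")
            return data
```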
Note: There are some breaking changes here. If you manually worked with the Enum names, some of them have changed to better reflect the code point's usage.
Generally speaking it should not affect service code.
For aria2c I've simplified the operation by offloading most of the work of creating a cookie header: simply re-do what Python-requests does. This results in the exact same cookies Python-requests would have used in a requests.get() call or such. It supports multiple same-name cookies under different domains/paths, based on the URI of the mock request.
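A sketch of that approach using requests' own cookie machinery (devine's exact code may differ; the URL is a placeholder):

```python
from requests import Request, Session
from requests.cookies import get_cookie_header

session = Session()  # assume cookies were already loaded into session.cookies

# build the exact Cookie header a session.get(url) would send, with
# domain/path matching applied to same-name cookies via a mock request
url = "https://example.com/video.mpd"
cookie_header = get_cookie_header(session.cookies, Request(url=url))
```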
It also looks for the "expected 2 but parsed 1" error which is likely an error while parsing the WVD version field. If this happens, it will inform the user to use `pywidevine migrate`.
This is mainly to lessen confusion over service name typos, or for new users getting used to the CLI.
It also changes the exceptions raised by the methods of Service from ClickException to KeyError, since they are intended to be used in the core codebase outside the context of Click.
This used to be used even before devine was public, but it was constantly changed back and forth between urljoin(), another form of urljoin (something custom I can't remember), and an if check + string concatenation.
However, I can confirm that a simple if check will not work, as the Base URI might not even be in the same relative root. The if checks have also been inconsistent, with some checking whether the URL starts with http(s):// and some checking whether it does not start with the base URI.
The if-check method simply cannot work as well as urljoin() can. This also fixes some services, as some HLS playlists would have the m3u8 URL on a completely different root, subdomain, or even domain, causing segment downloads to completely break.
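For example, urljoin() handles all the cases the if checks kept tripping over:

```python
from urllib.parse import urljoin

base = "https://cdn.example.com/hls/video/master.m3u8"

urljoin(base, "seg-1.ts")
# -> 'https://cdn.example.com/hls/video/seg-1.ts'    (same directory)
urljoin(base, "/other/seg-1.ts")
# -> 'https://cdn.example.com/other/seg-1.ts'        (different relative root)
urljoin(base, "https://media.example.net/seg-1.ts")
# -> 'https://media.example.net/seg-1.ts'            (different domain entirely)
```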
We cannot actually do this check. The Content-Length value will be the size after further encoding or compression. While we can find out what it was compressed with via the Content-Encoding header, we cannot match the downloaded length against the Content-Length header, as requests automatically decompresses/decodes according to the Content-Encoding header.
On new installs, or where the `WVDs` folder has not been made yet, shutil.move() assumes the destination is a file path and moves the `.wvd` file to the WVDs folder path as a file. If the folder existed, even empty, this error wouldn't have occurred.
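A minimal sketch of the fix this implies (paths illustrative):

```python
import shutil
from pathlib import Path

wvds_dir = Path("WVDs")
wvds_dir.mkdir(parents=True, exist_ok=True)  # ensure the destination is a folder
shutil.move("device.wvd", str(wvds_dir / "device.wvd"))
```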
DASH and normal URL downloads now both decrypt one large single or merged file after all downloads are finished. This leaves a bit of a "pause" between progress bar movement which looks a bit odd. So mark the track as being in a Decrypting state.
Since DASH doesn't have the ability to change keys dynamically per-track (Representation), there's no need for the DASH downloader to decrypt segments as they are downloaded (like HLS).
This halves the number of processes needing to be opened, as well as the I/O usage. It may result in noticeably lower CPU usage. Since the IOPS is lowered, you may even see an increase in download speed if downloading to something like a meh HDD.
This also fixes decryption in some weird edge-cases where decrypting each segment individually resulted in timestamp anomalies causing shaka to fail.
Some servers may not respond with a Content-Length header, even for segmented media, e.g., a subtitle URL. The requests downloader required the header to be present for every URL it downloads, which is not always possible.
Now it tries to get it if possible, and verifies the download size with the Content-Length value if it could be obtained.
HEAD requests were made to sum a total file size for the download operation. However, the downloader may be used on URLs where the content is not segmented media, so the server may not support or respond with a Content-Length header, which caused the requests downloader to crash before it even got a chance to begin downloading.
Even still, this total size value isn't really necessary, and obtaining it could cause 100s of HEAD requests in quick succession on segmented sources. It would also add an up-front delay before the download actually starts.
The setup I had for using asyncio.run with functools.partial didn't actually pan out. A full pass-through lambda is required.
I've also moved the mapped downloader variable to the root of the downloaders package.
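A sketch of the difference (the `aria2c` coroutine function here is a stand-in for the real downloader):

```python
import asyncio

async def aria2c(*args, **kwargs) -> int:  # stand-in for the real async downloader
    return 0

# didn't pan out: partial(asyncio.run, aria2c(...)) builds a single coroutine
# object up-front; asyncio.run() can only run it once, and no new arguments
# can be passed through on later calls.

# works: a full pass-through lambda creates a fresh coroutine on every call
downloader = lambda *args, **kwargs: asyncio.run(aria2c(*args, **kwargs))
```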
Most people used --skip-dl just to license the DRM pre-v1.3.0, which makes sense; --skip-dl is otherwise a pointless feature. I've fixed it so that --skip-dl works like before, allowing license calls, while still supporting the new per-segment features post-v1.3.0.
Fixes #37
segments[0] is the first tuple of two values, the URL and an optional byte range, so this accidentally passed the whole tuple rather than the URL within it.
Fixes #54
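In other words (a simplified sketch):

```python
segments = [("https://example.com/seg-1.mp4", "0-1023")]  # (url, byte_range) pairs

url = segments[0]              # bug: passes the whole tuple as the "URL"
url, byte_range = segments[0]  # fix: unpack to get the URL itself
```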
The change to pymp4 may result in it complaining when validating parameters of the tenc box data, i.e., if the version byte is not 0 or 1, etc. It shouldn't raise a ConstError on tenc boxes anymore, as the definition of the tenc box has been much improved in pymp4 v1.4.0.
pymp4 was updated to automatically do this during parsing and building of tenc boxes. Therefore it would instead fail parsing if the version is not 0 or 1.
Some services accidentally (I presume?) mix up the `tenc` box's data with that of an `avc1` box or similar. This causes total failure and crashing. However, in these scenarios there's usually a second box further down the stream that is not erroneous and will parse correctly, so just skip these errors and continue.
This got 'broken' after moving to my fork of pymp4 because my fork has commits by TrueDread that add support for the vttc, payl, and sttg boxes, therefore they no longer contain `data` fields but rather specifically parsed fields. I also no longer need to parse the data stream of vttc boxes, as they are already parsed as `children`.
It seems tenc box versions should only be 0 or 1, yet sometimes they can be above that. Some services seem to accidentally use the `tenc` atom code in place of another, e.g., Netflix sometimes using `tenc` instead of `avc1`.
It's not very appealing, nor is it info the user realistically needs for every download. They could check `devine env info` to see where downloads go if they are unsure.
This happens because the input file has no actual useful data, likely just a bunch of headers and nothing of use. Therefore shaka skips it and returns OK, yet didn't make any file at the decrypted path.
This fixes a crash when it tries to move the non-existent decrypted file to the input file location, causing the rest of the download to halt.
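A minimal sketch of the guard this implies (names hypothetical; whether it errors out or skips here is an assumption):

```python
import shutil
from pathlib import Path

def finish_decrypt(decrypted_path: Path, input_path: Path) -> None:
    # guard the move: shaka may return OK yet write no output file at all
    if not decrypted_path.exists():
        raise RuntimeError(f"Decryption failed: no file at {decrypted_path}")
    shutil.move(str(decrypted_path), str(input_path))
```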
Negative size values are not allowed by the spec basically anywhere in the document. Some services seem to accidentally specify a negative value, which puts pycaption on the fritz.
While it already has anti-duplicate checks, these checks did not take into account the `*` indicator or the `from x vault` note, etc. This rework of the duplicate check ignores all messages.
This will improve efficiency and accuracy of getting appropriate DRM systems when downloading segments.
This can dramatically improve download speed from less than 50 kb/s to full speed if the HLS playlist used a lot of AES-128 EXT-X-KEYs. E.g., a unique key for each segment.
This was caused because the HLS.get_drm function took EVERY EXT-X-KEY, checked for supported systems, loaded them, and returned the supported objects. This meant it could load possibly 100s of AES-128 ClearKey objects (each likely requiring a URL download for the key URI), causing a huge delay before downloading each segment.
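One way to picture the fix is to construct each unique key once and reuse it (a sketch; the cache itself is hypothetical and `ClearKey.from_m3u_key` is assumed to mirror devine's ClearKey loader):

```python
from devine.core.drm import ClearKey  # assumed import path

drm_cache: dict[str, ClearKey] = {}  # keyed by the EXT-X-KEY URI

def get_drm_cached(m3u_key) -> ClearKey:
    # load each unique key once instead of once per segment; AES-128 key URIs
    # often require a network fetch, so this avoids 100s of downloads
    if m3u_key.uri not in drm_cache:
        drm_cache[m3u_key.uri] = ClearKey.from_m3u_key(m3u_key)
    return drm_cache[m3u_key.uri]
```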
If you use --cdm-only, you could end up licensing multiple times if the PSSH has more than one Key ID. While this is checked for, now that the KID check is more lenient, it would end up continuing and licensing again.
However, even with the original check code this would have been pointless. If the first license did not return a content key for a KID, then the next license call with the exact same parameters wouldn't have either.
For example, if a service has the same PSSH or license call for the 720p and 1080p video tracks but doesn't return a KID for the 1080p track, the previous code would raise an error even though it has enough content key data to continue.
With this change it now only raises the error if the track's exact KID was not licensed. This adds support to prepare_drm for specifying the track's KID.
- DASH and HLS tracks must now explicitly provide the URL to download as the init segment. This is because the original code assumed the first segment or first init segment was the one the caller wants, which may not be the case (e.g., an ad is the first init segment).
It now supports explicitly providing a byte-range to download, as well as modifying the fallback content size of 20KB.
It now also checks whether the server supports the HTTP Range header and uses it over the hacky request-streaming method. It also checks the file size and uses that over the fallback size, as long as it's not bigger than 100 KB.
Overall it's now more dynamic to specific use-cases, and more efficient in various ways.
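A sketch of the probing logic described (constants and names illustrative):

```python
import requests

FALLBACK_SIZE = 20 * 1024  # 20 KB default when the real size is unknown

def get_init_segment(session: requests.Session, url: str) -> bytes:
    res = session.head(url)
    content_length = int(res.headers.get("Content-Length") or 0)
    # trust the reported file size only while it's not bigger than 100 KB
    size = content_length if 0 < content_length <= 100 * 1024 else FALLBACK_SIZE
    if res.headers.get("Accept-Ranges") == "bytes":
        # the server supports HTTP Range: ask for exactly the bytes we want
        res = session.get(url, headers={"Range": f"bytes=0-{size - 1}"})
        res.raise_for_status()
        return res.content
    # otherwise fall back to the hacky streaming method: read size bytes, stop
    res = session.get(url, stream=True)
    res.raise_for_status()
    return next(res.iter_content(size))
```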
These only happen if we intentionally cancel the process, or if it failed. However, that is generally obvious given the args, and when cancelling, a wall of these logs would appear.