I'm not happy with the approach used here to make portable installations of Devine, therefore for now I will remove the information relating to portable installations.
For aria2c I've simplified the operation by offloading most of the work of creating a cookie header by just re-doing what python-requests does. This results in the exact same cookies python-requests would have used in a requests.get() call or such. It supports multiple same-name cookies under different domains/paths, matched based on the URI of the mock request.
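A minimal sketch of the idea, using requests' own `get_cookie_header()` against a mock prepared request (the URLs and cookie values here are hypothetical):

```python
import requests
from requests.cookies import RequestsCookieJar, get_cookie_header

# two same-name cookies under different domains
jar = RequestsCookieJar()
jar.set("session", "aaa", domain="example.com", path="/")
jar.set("session", "bbb", domain="cdn.example.com", path="/")

# a mock (prepared) request for the target URI; the jar applies the
# exact same domain/path matching rules a requests.get() call would
request = requests.Request("GET", "https://cdn.example.com/video.mp4").prepare()
print(get_cookie_header(jar, request))  # -> "session=bbb"
```

The resulting header string can then be passed straight to aria2c.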
It also looks for the "expected 2 but parsed 1" error, which is likely an error while parsing the WVD version field. If this happens, it will inform the user to run `pywidevine migrate`.
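Roughly how such a check can look, assuming pywidevine's `Device.load()` and construct's `ConstError` (the `.wvd` path is hypothetical):

```python
from construct import ConstError
from pywidevine.device import Device

try:
    device = Device.load("device.wvd")  # hypothetical path
except ConstError as e:
    if "expected 2 but parsed 1" in str(e):
        # the version field likely parsed as 1, i.e. an old v1 WVD structure
        raise ValueError(
            "Device seems to be a v1 WVD; run `pywidevine migrate` on it."
        ) from e
    raise
```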
This is mainly to lessen confusion over service name typos, or for new users getting used to the CLI.
It also changes the exceptions raised by the methods of Service from ClickException to KeyError, since they are intended to be used in the core codebase outside the context of Click.
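A hypothetical sketch of the lookup (the `SERVICES` registry and `get_service` name are illustrative, not Devine's actual identifiers):

```python
SERVICES = {}  # hypothetical registry: tag -> Service class

def get_service(tag: str):
    try:
        return SERVICES[tag.upper()]
    except KeyError:
        # KeyError instead of click.ClickException so core code that
        # isn't running under Click can catch it normally
        raise KeyError(f"There's no service named '{tag}'.") from None
```

The CLI layer can still catch the KeyError and re-raise it as a ClickException for pretty output.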
This was in use even before devine was public, but it was constantly changed back and forth between an urljoin(), another form of urljoin (something custom, I can't remember), and an if check + string concatenation.
However, I can confirm that a simple if check will not work, as the base URI might not even be in the same relative root. The if checks have also been inconsistent: some check whether the URL starts with http(s)://, and some check whether it does not have the base URI at the start of the string.
The if-check method simply cannot do what urljoin() can. Switching also fixes some services, as some HLS playlists have the m3u8 URL on a completely different root, subdomain, or even domain, which completely broke segment downloading.
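For illustration, urljoin() handles all three cases correctly (all URLs here are hypothetical):

```python
from urllib.parse import urljoin

playlist = "https://a.cdn.example.com/hls/video/master.m3u8"

# relative URI -> resolved against the playlist's directory
urljoin(playlist, "segments/0001.ts")
# -> "https://a.cdn.example.com/hls/video/segments/0001.ts"

# root-relative URI -> resolved against the playlist's host
urljoin(playlist, "/other/0001.ts")
# -> "https://a.cdn.example.com/other/0001.ts"

# absolute URI on another (sub)domain -> left untouched
urljoin(playlist, "https://b.cdn.example.net/seg/0001.ts")
# -> "https://b.cdn.example.net/seg/0001.ts"
```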
We cannot actually do this check. The Content-Length value will be the size after being further encoded or compressed. While we can find out what it was compressed with via the Content-Encoding header, we cannot match the downloaded length against the Content-Length header, as requests automatically decompresses/decodes the body according to the Content-Encoding header.
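A short demonstration of the mismatch (the URL is hypothetical):

```python
import requests

r = requests.get("https://example.com/big.json")  # hypothetical URL
# Content-Length (if sent) describes the *encoded* body on the wire:
wire_size = r.headers.get("Content-Length")
# requests transparently decodes per Content-Encoding (e.g. gzip),
# so the decoded body length can legitimately differ:
body_size = len(r.content)
# comparing the two would report false "incomplete download" errors
```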
On new installs, or where the `WVDs` folder has not been created yet, shutil.move() assumes the destination is a file path and moves the `.wvd` file to the WVDs folder path as a file. If the folder had existed but was empty, this error would not have occurred.
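A minimal sketch of the pitfall and the fix (paths are hypothetical):

```python
import shutil
from pathlib import Path

wvds = Path("WVDs")  # destination folder, may not exist on fresh installs

# without this, shutil.move("device.wvd", "WVDs") would create a *file*
# named "WVDs" instead of placing the file inside a folder
wvds.mkdir(parents=True, exist_ok=True)
shutil.move("device.wvd", wvds / "device.wvd")
```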
DASH and normal URL downloads now both decrypt one large single or merged file after all downloads are finished. This leaves a bit of a "pause" between progress bar movements, which looks a bit odd, so mark the track as being in a Decrypting state.
Since DASH doesn't have the ability to change keys dynamically per-track (Representation), there's no need for the DASH downloader to decrypt segments as they are downloaded (like HLS).
This halves the number of processes needing to be opened, as well as the I/O usage. It may result in noticeably lower CPU usage. Since the IOPS are lowered, you may even see an increase in download speed when downloading to something like a slow HDD.
This also fixes decryption in some weird edge cases where decrypting each segment individually resulted in timestamp anomalies, causing shaka-packager to fail.
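For illustration, a single-pass decrypt of the merged file might look like the following sketch, assuming shaka-packager's raw-key decryption flags (file names and key values are hypothetical):

```python
import subprocess

KEY_ID = "00000000000000000000000000000000"  # hypothetical KID (hex)
KEY = "00000000000000000000000000000000"     # hypothetical key (hex)

# one packager process for the whole merged track, instead of one per segment
subprocess.run(
    [
        "packager",
        "in=merged.mp4,stream=0,output=decrypted.mp4",
        "--enable_raw_key_decryption",
        "--keys", f"label=:key_id={KEY_ID}:key={KEY}",
    ],
    check=True,
)
```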
Some servers may not respond with the Content-Length header, even for segmented media, e.g., on a subtitle URL. The requests downloader required the header to be present for every URL it downloads, which is not always possible.
Now it tries to get it where possible, and verifies the download size against the Content-Length value if it could be obtained.
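A minimal sketch of the now-optional check (the URL is hypothetical):

```python
import requests

r = requests.get("https://example.com/track.vtt")  # hypothetical URL
content_length = r.headers.get("Content-Length")
data = r.content
# only verify when the server actually gave us a size to verify against
if content_length is not None and len(data) != int(content_length):
    raise OSError(
        f"Download incomplete: got {len(data)} bytes, expected {content_length}"
    )
```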
HEAD requests were made to sum up a total file size for the download operation. However, the downloader may be used on URLs where the content is not segmented media, so the server may not support or respond with the Content-Length header, causing the requests downloader to crash before it even got a chance to begin downloading.
Even still, this total size value isn't really necessary, and computing it could cause possibly hundreds of HEAD requests in quick succession on segmented sources. It would also add an up-front delay before the download actually starts.
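Roughly what the removed behavior amounted to (URLs are hypothetical):

```python
import requests

segment_urls = [f"https://example.com/seg/{i:04}.mp4" for i in range(500)]

# the removed approach: an up-front HEAD request per URL just to sum a
# total size; crashes with KeyError when Content-Length is absent
total_size = sum(
    int(requests.head(url).headers["Content-Length"])
    for url in segment_urls
)
```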
The setup I had for using asyncio.run with functools.partial didn't actually pan out; a full pass-through lambda is required.
I've also moved the mapped downloader variable to the root of the downloaders package.
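A hypothetical sketch of why the partial approach fails (the `aria2c` stand-in is illustrative):

```python
import asyncio
from functools import partial

async def aria2c(*args, **kwargs): ...  # stand-in for the async downloader

# fails: asyncio.run() wants a coroutine *object*, and partial() only
# hands it the function (or the function plus stray positional args)
# downloader = partial(asyncio.run, aria2c)

# a full pass-through lambda builds a fresh coroutine on each call instead:
downloader = lambda *args, **kwargs: asyncio.run(aria2c(*args, **kwargs))
```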
Most people used --skip-dl just to license the DRM pre-v1.3.0, which makes sense; --skip-dl is otherwise a pointless feature. I've fixed it so that --skip-dl works like before, allowing license calls, while still supporting the new per-segment features post-v1.3.0.
Fixes #37
segments[0] is the first tuple of two values: the URL and an optional byte range. So this accidentally passed the whole tuple rather than the URL within it.
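In short (the URL is hypothetical):

```python
# each segment is (url, byte_range); byte_range may be None
segments = [("https://example.com/seg/0001.mp4", None)]

url = segments[0]     # bug: the whole (url, byte_range) tuple
url = segments[0][0]  # fix: just the URL
```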
Fixes #54
The change to pymp4 may result in it complaining when validating parameters of the tenc box data, i.e., if the version byte is not 0 or 1, etc. It should no longer raise a ConstError on tenc boxes, as the definition of the tenc box has been much improved in pymp4 v1.4.0.
pymp4 was updated to automatically do this validation during parsing and building of tenc boxes. Therefore, it now instead fails parsing if the version is not 0 or 1.
It may look like I'm downgrading, but I'm not.
1.5.0 was an incorrect version bump based on TrueDread's fork. v1.4.0 is the next release after it, even though it's a lower version number, and v1.4.0 should be used instead.
Some services accidentally (I presume) mix up the `tenc` box's data with that of an `avc1` box or similar. This causes total failure and crashing. However, in these scenarios there's usually a second box further down the stream that is not malformed and will parse correctly, so just skip these errors and continue.
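Roughly how the skip-and-continue scan can look, assuming pymp4's `Box` and construct's `ConstructError` (the helper name is illustrative):

```python
from construct import ConstructError
from pymp4.parser import Box

def iter_boxes(data: bytes, box_type: bytes):
    """Yield every parseable box of one type, skipping malformed ones."""
    offset = data.find(box_type)
    while offset != -1:
        try:
            # the 4-byte size field sits right before the type code
            yield Box.parse(data[offset - 4:])
        except ConstructError:
            pass  # e.g. avc1-like data under a tenc code; try the next match
        offset = data.find(box_type, offset + 1)
```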
This got 'broken' after moving to my fork of pymp4, because my fork has commits by TrueDread that add support for the vttc, payl, and sttg boxes; they therefore no longer contain `data` fields but rather specifically parsed fields. I also no longer need to parse the data stream of vttc boxes, as they are already parsed as `children`.
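A rough sketch of reading cues post-fork; `vttc_bytes` is hypothetical, and the payl/sttg field names follow TrueDread's box definitions and are assumptions here:

```python
from pymp4.parser import Box

vttc_box = Box.parse(vttc_bytes)  # hypothetical raw bytes of a vttc box

# child boxes arrive already parsed (no raw `data` fields to parse manually)
for child in vttc_box.children:
    if child.type == b"payl":
        text = child.cue_text        # parsed cue text field (assumed name)
    elif child.type == b"sttg":
        settings = child.settings   # parsed cue settings field (assumed name)
```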
It seems tenc box versions should only be 0 or 1, yet sometimes they can be above that. It seems some services accidentally use the `tenc` atom code in place of another, e.g., Netflix sometimes using `tenc` instead of `avc1`.