HLS playlists where each segment is in an MP4 container seem to corrupt when the EXT-X-MAP is swapped out, unless you first merge segments by discontinuity and then merge those merges via FFmpeg (which demuxes all of the merged segment continuities and concatenates them together, probably giving the result new init data too).
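As a rough sketch of that second-stage merge (the file names are hypothetical, and the per-discontinuity merges are assumed to already exist on disk), FFmpeg's concat demuxer can be driven like this:

```python
import subprocess
from pathlib import Path

# Hypothetical per-discontinuity merged files produced by the first pass.
merged_parts = [Path("discontinuity_0.mp4"), Path("discontinuity_1.mp4")]

# Write the FFmpeg concat demuxer list file.
concat_list = Path("concat.txt")
concat_list.write_text("\n".join(f"file '{p.resolve()}'" for p in merged_parts))

# Let FFmpeg demux each part and concatenate them into one continuous file.
subprocess.run(
    [
        "ffmpeg",
        "-f", "concat",   # use the concat demuxer rather than byte-wise joining
        "-safe", "0",     # allow absolute paths in the list file
        "-i", str(concat_list),
        "-c", "copy",     # no re-encode, just remux
        "merged.mp4",
    ],
    check=True,
)
```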
Previously, all entities were decoded in Subtitle files because of a problem with SubtitleEdit and its /ReverseRtlStartEnd option not being entity-aware.
It actually ends up reversing the `;` of `&rlm;` instead of the actual value of `&rlm;`. Therefore, I decoded all entities before SubtitleEdit could process the Subtitle, but this has caused problems with more advanced formats like TTML and WebVTT, as `&lt;` would decode to `<`, causing syntax errors, among other problematic characters.
According to the TTML and WebVTT specs, HTML entity encoding is allowed, and that makes sense, or you wouldn't be able to use characters like `<` at all. Any failure by players to show the decoded character would be a player problem and out of scope for Devine.
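A quick illustration of why blanket decoding breaks these markup-based formats, using Python's standard `html` module (the cue text is made up):

```python
import html

webvtt_cue = "See the &lt;i&gt;italic&lt;/i&gt; note&rlm;"

# Blanket decoding turns the escaped markup into real tags, changing the cue's
# structure, and turns &rlm; into an invisible U+200F control character.
print(html.unescape(webvtt_cue))
# See the <i>italic</i> note\u200f
```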
This is because SubtitleEdit keeps color-related information when converting to SRT from WebVTT, TTML, and such formats. Why? Not 100% sure. Maybe some players support colors, but generally if you are using SubRip, it's because you either only want basic text subs, or your player doesn't support these "fancy" ooh-la-la colors.
This is a better solution than just stripping out the information. As the option name suggests, it isn't just removing the color information but rather using it to detect different speakers, then appropriately "dialogify" the captions when needed, i.e., start each speaker's sentence with `- ` and separate them with a new line.
The dash-style dialog formatting is quite vital for knowing whether a caption is spoken entirely by one speaker or by multiple. It's not particularly necessary for non-SDH captioning, but it would be wanted for SDH subtitles.
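To illustrate the idea (a rough sketch of the concept only, not SubtitleEdit's actual implementation; the tag format and function name are made up), colour-based speaker detection could look roughly like this:

```python
import re

def dialogify(cue_lines: list[str]) -> str:
    """Turn colour-tagged lines into dash-style dialog when more than one
    speaker (colour) is present; otherwise return the plain text unchanged."""
    colours = set()
    plain_lines = []
    for line in cue_lines:
        match = re.search(r'<font color="([^"]+)">', line)
        if match:
            colours.add(match.group(1))
        plain_lines.append(re.sub(r"</?font[^>]*>", "", line).strip())

    if len(colours) > 1:
        return "\n".join(f"- {line}" for line in plain_lines)
    return "\n".join(plain_lines)

print(dialogify(['<font color="#00ff00">Where are you?</font>',
                 '<font color="#ffff00">Right behind you.</font>']))
# - Where are you?
# - Right behind you.
```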
Overall, this commit just makes working with Chapters a lot less manual and convoluted. The current system has you specify information that can easily be automated, like Chapter order and numbers, and automating that is one of the main changes in this commit.
Note: This is a Breaking change and requires updates to your Service code. The `get_chapters()` method must be updated. For more information see the updated doc-string for `Service.get_chapters()`.
- Added new Chapters class which automatically sorts Chapters by timestamp.
- The Chapter class has been significantly reworked to be much more generic. Most operations have been moved to the new Chapters class.
- Chapter objects can no longer specify a Chapter number. The number is now set automatically based on the Chapter's sorted order within the Chapters object.
- Chapter objects can now provide a timestamp in more formats. Timestamps are now verified more efficiently.
- A Chapter object's ID is now a crc32 hash of its timestamp and name, instead of essentially just its number.
- The Chapters object now has an ID as well: a crc32 hash of all of the Chapter IDs it holds. This ID can be used for things like temp paths.
- `Service.get_chapters()` must now return a Chapters object. The Chapters object may be empty. The Chapters object must hold Chapter objects (see the sketch after this list).
- Using `Chapter {N}` or `Act {N}` Chapters and so on is no longer permitted. You should instead leave the name blank if there's no descriptive name to use for it.
- If you or a user wants `Chapter {N}` names, then they can use the config option `chapter_fallback_name` set to `"Chapter {i:02}"`. See the config documentation for more info.
- Do not add a `00:00:00.000` Chapter, at all. This is automatically added for you if there's at least 1 Chapter with a timestamp after `00:00:00.000`.
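As a rough illustration of the new shape (the import paths and constructor signature below are assumptions for illustration, not the confirmed API; check the updated doc-strings), a Service's `get_chapters()` might now look something like this:

```python
# Import paths and constructor signatures here are assumptions.
from devine.core.service import Service
from devine.core.titles import Chapter, Chapters

class MyService(Service):
    def get_chapters(self) -> Chapters:
        # Order doesn't matter and no numbers are given; the Chapters object
        # sorts by timestamp and numbers each Chapter automatically.
        return Chapters([
            Chapter("00:05:30.000", name="Opening"),
            Chapter(1326.5),  # unnamed; `chapter_fallback_name` can supply "Chapter {i:02}"-style names
            Chapter("00:41:10.000", name="Credits"),
        ])
```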
- Removed the `devine auth` command and its sub-commands due to lack of support, risk to data, and its general quirks.
- Removed the `profiles` config data; you must now specify which profile you wish to use each time with -p/--profile. If you use a specific profile a lot more than others, you should make it the default. See below.
- Added a `default` key to each service mapping in `credentials` that will be used if -p/--profile is not specified.
- Each service mapping in `credentials` is no longer forced to use profiles. You can now simply specify `Service: username:password` if you only use one credential.
- Auth-less Services now simply need no credential and no cookie file.
- There is no longer an error for not having a cookie and/or credential for the chosen profile, as a profile no longer has to be chosen.
- Cookies are now checked for in 3 different locations in the following order:
1. `/Cookies/{Service Name}.txt`
2. `/Cookies/{Service Name}/{profile}.txt`
3. `/Cookies/{Service Name}/default.txt`
This means you now have more options for the organization and layout of Cookie files, similar to the new Credentials config.
Note: `/Cookies/{Service Name}/.txt` also works as an alternative to `default.txt`. The benefit of this is that `.txt` will always sort to the top of your folder.
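As a rough sketch of that lookup order (the function name and `cookies_dir` root are hypothetical, not devine's actual code):

```python
from pathlib import Path
from typing import Optional

def find_cookie_file(cookies_dir: Path, service: str, profile: Optional[str] = None) -> Optional[Path]:
    """Return the first cookie file that exists, following the order listed above."""
    candidates = [cookies_dir / f"{service}.txt"]
    if profile:
        candidates.append(cookies_dir / service / f"{profile}.txt")
    candidates += [
        cookies_dir / service / "default.txt",
        cookies_dir / service / ".txt",  # the alternative to default.txt noted above
    ]
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None
```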
This is to reduce the amount of required dependencies by not strictly requiring aria2c out of the box. You can always change the downloader back to aria2c in the config.
This fixes some Subtitles having e.g., `&amp;` instead of just `&`, but especially for special entities like `&rlm;`, which enables Right-to-Left mode on Hebrew and Arabic Subtitles.
Previously, it would show the download as fully complete after the first 1024-byte chunk was downloaded, because the Progress Bar's total value was set to the number of URLs. It assumed there would be multiple URLs to download at once and advanced the progress bar each time one of those downloads completed instead.
This changes it so that if there's only one URL to download, it calculates the total number of chunks to download, which corrects the progress bar's advances.
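A minimal sketch of the single-URL case, assuming a `requests` + `rich` style download loop (names and structure are illustrative, not devine's actual downloader):

```python
import math

import requests
from rich.progress import Progress

CHUNK_SIZE = 1024  # bytes per streamed chunk

def download(url: str, path: str) -> None:
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        content_length = int(response.headers.get("Content-Length", 0))
        # One URL: the total is the number of chunks, not the number of URLs.
        total = math.ceil(content_length / CHUNK_SIZE) if content_length else None
        with Progress() as progress, open(path, "wb") as f:
            task = progress.add_task("Downloading", total=total)
            for chunk in response.iter_content(CHUNK_SIZE):
                f.write(chunk)
                progress.advance(task)
```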
It normally auto-detects the format from the file extension. The supported formats are "MP4" and "WEBM". The input files to shaka-packager are currently always ".mp4", so this isn't particularly an issue.
However, I want to add this just as a precaution in case it isn't. This isn't an issue if the input file is another format, like WEBM, as this only controls the output format (the format devine wants), not the input format.
To possibly support download resuming in the future, the file names for the decrypt, repack, and change-range functions were simplified, and once output has finished, the original input file is deleted and its path is re-used for the output.
The file names were changed to just append `_repack`, `_decrypted`, `_full_range` etc. to the filename rather than using a duplex extension (`.repack.mp4`, `.decrypted.mp4`, `.range0.mp4`).
This is all so that the code that checks whether a file was already downloaded can be simpler. Instead of having to check whether 4 different possible file names for a completed download exist, it checks just one.
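For illustration, with `pathlib` (the `FILENAME.mp4` path is just a placeholder):

```python
from pathlib import Path

original = Path("FILENAME.mp4")

# Old style: a duplex extension, one of several possible names per file.
old_repacked = original.with_name(f"{original.stem}.repack{original.suffix}")  # FILENAME.repack.mp4

# New style: append a suffix to the stem and keep a single extension; once the
# output is finished, it replaces the original input path.
new_repacked = original.with_stem(f"{original.stem}_repack")  # FILENAME_repack.mp4
```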
This issue is common with Now TV, where for some reason the subtitle parses into "two" languages, "en" and "eng". This results in one empty caption list and one non-empty caption list. The empty caption list tends to come first.
This issue causes a multitude of snowballing problems further down the codebase, e.g., when converting to SRT it results in a "MULTI-LANGUAGE SRT" header, which most programs (like mkvmerge) do not recognize, causing a mux failure.
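A rough sketch of the kind of filtering this implies (the dict structure is a stand-in for whatever the subtitle parser actually produces, not its real API):

```python
def drop_empty_languages(captions_by_language: dict[str, list]) -> dict[str, list]:
    """Keep only languages that actually contain captions, so a single-language
    track is not mistaken for a multi-language one."""
    return {lang: caps for lang, caps in captions_by_language.items() if caps}

# The Now TV case: "en" is empty, "eng" holds the real captions.
cleaned = drop_empty_languages({"en": [], "eng": ["caption 1", "caption 2"]})
assert list(cleaned) == ["eng"]
```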
The browser to imitate can be set in the config:
For example,
```yaml
curl_impersonate:
browser: chrome110
```
It will default to using chrome110 if no value is set in the config.
A list of available browsers can be found here: https://github.com/yifeikong/curl_cffi#sessions
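For reference, the per-request form of that impersonation in curl_cffi itself looks like this (devine's actual integration may differ):

```python
from curl_cffi import requests

# Fingerprint the TLS/HTTP handshake as the configured browser target.
response = requests.get("https://example.com", impersonate="chrome110")
print(response.status_code)
```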
This was originally done to suppress *all* aria2c logs except on the last attempt, so that if all attempts failed it would let aria2c log the error.
However, that's bad practice, as aria2c may produce errors or warnings on, say, the 3rd attempt, and that attempt may have otherwise succeeded despite the warnings or errors. Suppressing the logs also generally shouldn't be necessary.
The original values would cause some Services to block the download. Therefore, it is better to default to safer values. The new values match aria2c's own defaults as listed in its docs.
For downloads by devine, there's generally no reason to retrieve this information when it will be decrypted, repacked, remuxed, and so on anyway. Requesting the timestamp will just mean more requests being made, perhaps slowing down the download.
This was added by another team member a long time ago, seemingly to prevent splitting on DASH/HLS segment files, as they would already be quite small.
However, just because segments are small doesn't mean splitting them is a problem, and a file would only split if its size fits the default split size of 20M at least twice. I.e., if the segment is 45M, it will split into two downloads; if the segment is 25M, it won't split at all. You may think 25M would split by 20M into two downloads, but the split size must fully fit for a split to happen. So for 2 downloads the file needs to be at least 40M, for 3 downloads 60M, for 4 downloads 80M, and so on.
A 40M or bigger segment file does in my opinion deserve to be split as it may genuinely reap speed benefits.
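To make the arithmetic concrete (this mirrors the behaviour described above, not aria2c's exact internals):

```python
def split_count(file_size_mib: float, split_size_mib: float = 20.0) -> int:
    """Number of parallel downloads per the rule described above: a file only
    splits when it fits the split size at least twice."""
    return max(1, int(file_size_mib // split_size_mib))

assert split_count(25) == 1   # does not split at all
assert split_count(45) == 2   # splits into two downloads
assert split_count(60) == 3   # three downloads, and so on
```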
If the Server returns a Content-Length header with a value of 0, the code shortly after it would end up looping over streamed response chunks of zero length, which would go on forever.
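A minimal sketch of the kind of guard that avoids this (illustrative only, not the actual fix in the downloader):

```python
import requests

def stream_to_file(url: str, path: str, chunk_size: int = 1024) -> int:
    """Stream a URL to disk, bailing out early when there is nothing to stream."""
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        content_length = response.headers.get("Content-Length")
        if content_length is not None and int(content_length) == 0:
            # Nothing to download; write an empty file and return rather than
            # iterating zero-length chunks indefinitely.
            open(path, "wb").close()
            return 0
        written = 0
        with open(path, "wb") as file:
            for chunk in response.iter_content(chunk_size):
                file.write(chunk)
                written += len(chunk)
        return written
```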