ISO/IEC 14496-15 says about the `HEVCDecoderConfigurationRecord`:
> **array_completeness** when equal to 1 indicates that all NAL units of
> the given type are in the following array and none are in the stream;
> when equal to 0 indicates that additional NAL units of the indicated type
> may be in the stream; the default and permitted values are constrained
> by the sample entry name;
This PR sets `array_completeness` to 0 when parameter set NAL units may
appear in the stream, i.e. when they are not stripped by
`--strip_parameter_set_nalus`.
This should improve player compatibility for streams with mid-stream
SAR changes.
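A minimal sketch of the intended logic (illustrative names, not Packager's actual API):

```cpp
#include <cstdint>

// Illustrative sketch only: choose array_completeness for the hvcC
// parameter-set arrays based on whether parameter set NAL units were
// stripped from the samples.
uint8_t GetArrayCompleteness(bool strip_parameter_set_nalus) {
  // If parameter sets are stripped, they can only live in the sample entry,
  // so the arrays are complete (1). Otherwise additional VPS/SPS/PPS may
  // appear in-band (e.g. on a SAR change), so signal incompleteness (0).
  return strip_parameter_set_nalus ? 1 : 0;
}
```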
---------
Co-authored-by: Cosmin Stejerean <cstejerean@meta.com>
According to a comment in
packager/third_party/abseil-cpp/source/absl/log/CMakeLists.txt, many
linkers will strip the contents of absl::log_flags because its symbols
are only used in a global constructor; for now, clients should link it
using `$<LINK_LIBRARY:WHOLE_ARCHIVE,absl::log_flags>`.
Closes #1325
Add the ability to set the `Label` tag in the MPD; see page 35 of DASH-IF IOP 4.3:
https://dashif.org/docs/DASH-IF-IOP-v4.3.pdf
Implements #881
---------
Co-authored-by: Cosmin Stejerean <cstejerean@meta.com>
This will force the muxer to order streams in the order given on the
command-line.
Closes #560, closes #1280, closes #1313
---------
Co-authored-by: Joey Parrish <joeyparrish@users.noreply.github.com>
Co-authored-by: Cosmin Stejerean <cstejerean@meta.com>
protobuf depends on absl, so both needed an update.
Updating absl fixes issues on Alpine 3.19 (see PR #1327), and also
removes the need for hacks around vlog flags.
Part of https://github.com/shaka-project/shaka-packager/issues/369
This adds read support for some MPEG-TS PMT elementary stream
descriptors:
- ISO639 Language Descriptor providing language code and audio type
- Maximum Bitrate Descriptor providing peak stream bandwidth
This metadata is propagated to the StreamInfo structures:
- StreamInfo.language field
- AudioStreamMetadata.max_bitrate field for audio streams
- audio type is currently not propagated; a corresponding field would have
to be added to AudioStreamMetadata
A test vector file containing these descriptors is provided.
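For reference, a rough sketch of how these two descriptors are laid out per ISO/IEC 13818-1 (illustrative code, not Packager's actual parser):

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// ISO_639_language_descriptor (tag 0x0A): 3-byte language code + audio_type.
struct LanguageDescriptor {
  std::string language;  // ISO 639-2 code, e.g. "eng".
  uint8_t audio_type = 0;
};

std::optional<LanguageDescriptor> ParseIso639Descriptor(
    const std::vector<uint8_t>& body) {
  if (body.size() < 4) return std::nullopt;
  LanguageDescriptor d;
  d.language.assign(body.begin(), body.begin() + 3);
  d.audio_type = body[3];
  return d;
}

// maximum_bitrate_descriptor (tag 0x0E): 2 reserved bits followed by a
// 22-bit maximum_bitrate expressed in units of 50 bytes/second.
std::optional<uint32_t> ParseMaxBitrateBps(const std::vector<uint8_t>& body) {
  if (body.size() < 3) return std::nullopt;
  const uint32_t units_of_50_bytes =
      ((body[0] & 0x3F) << 16) | (body[1] << 8) | body[2];
  return units_of_50_bytes * 50 * 8;  // convert to bits per second
}
```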
A positive value, in milliseconds. It is the threshold used to determine
whether we should assume that the text stream actually starts at time zero.
If the first sample comes before default_text_zero_bias_ms, then the
start will be padded, as the stream is assumed to start at zero. If the
first sample comes after default_text_zero_bias_ms, then the start of the
stream will not be padded, as we cannot assume the start time of the
stream.
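In other words, a minimal sketch of the decision (illustrative names, not Packager's internals):

```cpp
#include <cstdint>

// Decide whether to pad the start of a text stream back to time zero.
bool ShouldPadTextStreamStartToZero(int64_t first_sample_start_ms,
                                    int64_t default_text_zero_bias_ms) {
  // If the first cue appears within the bias window, assume the stream
  // logically starts at time zero and pad the gap; otherwise leave it alone.
  return first_sample_start_ms < default_text_zero_bias_ms;
}
```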
The current mbedtls integration was not working for some modes; see for
example #1316 and also lots of failing integration tests.
For example, the pattern encryptor works on one block at a time, so it
cannot assume it will always get a buffer with room for an extra padding
block.
From what I can tell, when the padding mode is correctly set to
`MBEDTLS_PADDING_NONE`, no extra block is written or required.
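For illustration, the relevant mbedtls setup looks roughly like this (a sketch, not Packager's actual AesCryptor code):

```cpp
#include <mbedtls/cipher.h>

// With MBEDTLS_PADDING_NONE, mbedtls_cipher_update() produces exactly one
// block of output per full input block, so the caller does not need to
// reserve space for an extra padding block.
bool SetupAes128CbcNoPadding(mbedtls_cipher_context_t* ctx,
                             const unsigned char key[16]) {
  mbedtls_cipher_init(ctx);
  const mbedtls_cipher_info_t* info =
      mbedtls_cipher_info_from_type(MBEDTLS_CIPHER_AES_128_CBC);
  if (!info || mbedtls_cipher_setup(ctx, info) != 0) return false;
  if (mbedtls_cipher_setkey(ctx, key, 128, MBEDTLS_ENCRYPT) != 0) return false;
  // The important part: the default padding is PKCS7; pattern (cbcs)
  // encryption feeds single blocks, so padding must be disabled.
  return mbedtls_cipher_set_padding_mode(ctx, MBEDTLS_PADDING_NONE) == 0;
}
```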
This passes all crypto unit tests and integration tests.
Closes #1316
The fix in #1289 was not complete and left the fake clock as null, so it
had no effect. This was revealed by integration tests showing
mismatches in the timestamps in MP4.
As part of the CMake port, we updated the duration formatting to contain a
maximum of 6 decimal places, without trailing zeros. However, there was a
bug where it used 6 significant digits rather than 6 decimal places
(`%g` rather than `%f`).
This fixes the bug and also updates the MPD sample files for the
integration tests to contain a maximum of 6 decimal places.
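A small example of the difference:

```cpp
#include <cstdio>

int main() {
  const double duration_seconds = 734.166667;
  // "%.6g" keeps 6 *significant digits*: prints "734.167" (precision lost).
  std::printf("%.6g\n", duration_seconds);
  // "%.6f" keeps 6 *decimal places*: prints "734.166667"; trailing zeros
  // are then stripped, per the fix described above.
  std::printf("%.6f\n", duration_seconds);
  return 0;
}
```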
The current libwebm integration test samples contain `libwebm-0.2.1`, but
we have updated to a newer version of libwebm, so the samples need to be
updated.
As of `libwebm-0.3.0` this signature is frozen, so we won't have to
do this again.
This work was done over ~80 individual commits in the `cmake` branch,
which are now being merged back into `main`. As a roll-up commit, it is
too big to be reviewable, but each change was reviewed individually in
context of the `cmake` branch. After this, the `cmake` branch will be
renamed `cmake-porting-history` and preserved.
---------
Co-authored-by: Geoff Jukes <geoffjukes@users.noreply.github.com>
Co-authored-by: Bartek Zdanowski <bartek.zdanowski@gmail.com>
Co-authored-by: Carlos Bentzen <cadubentzen@gmail.com>
Co-authored-by: Dennis E. Mungai <2356871+Brainiarc7@users.noreply.github.com>
Co-authored-by: Cosmin Stejerean <cstejerean@gmail.com>
Co-authored-by: Carlos Bentzen <carlos.bentzen@bitmovin.com>
Co-authored-by: Cosmin Stejerean <cstejerean@meta.com>
Co-authored-by: Cosmin Stejerean <cosmin@offbytwo.com>
GCC version 13 needs `<cstdint>` to be explicitly included to
provide the fixed-width integer types.
Some files using them include `<stdint.h>`; others are missing any direct
or indirect inclusion. This PR adds a `<cstdint>` include to the
minimal set of files needed to allow compilation on GCC 13.
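A minimal illustration of the failure mode:

```cpp
// With GCC 13, <cstdint> is no longer pulled in transitively by other
// libstdc++ headers as often, so it must be included explicitly.
#include <cstdint>

std::uint32_t flags = 0;    // fails to compile on GCC 13 without <cstdint>
std::int64_t timestamp = -1;
```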
Closes #1305
As per the AV1 spec, the codec string may contain optional color values.
This extracts the missing color information from the mp4 `colr` atom, if
present, and generates the full AV1 codec string.
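Roughly, the full codec string is assembled like this (a sketch following the AV1 codecs-parameter format; the function and parameter names are illustrative, not Packager's actual API):

```cpp
#include <cstdio>
#include <string>

// Builds "av01.<profile>.<level><tier>.<bitDepth>.<mono>.<chroma>.
//         <colorPrimaries>.<transferCharacteristics>.<matrixCoefficients>.
//         <fullRangeFlag>" per the AV1 codec string format.
std::string BuildAv1CodecString(int profile, int level, bool high_tier,
                                int bit_depth, bool monochrome,
                                int chroma_subsampling,  // e.g. 110 for 4:2:0
                                int color_primaries,
                                int transfer_characteristics,
                                int matrix_coefficients, bool full_range) {
  char buf[64];
  std::snprintf(buf, sizeof(buf),
                "av01.%d.%02d%c.%02d.%d.%03d.%02d.%02d.%02d.%d",
                profile, level, high_tier ? 'H' : 'M', bit_depth,
                monochrome ? 1 : 0, chroma_subsampling, color_primaries,
                transfer_characteristics, matrix_coefficients,
                full_range ? 1 : 0);
  return buf;
}

// e.g. BuildAv1CodecString(0, 4, false, 10, false, 112, 9, 16, 9, false)
// yields "av01.0.04M.10.0.112.09.16.09.0".
```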
Closes #1007
Fix a bug where, if the WebVTT file is very short, e.g. contains only one
cue:
WEBVTT
00:00:00.500 --> 00:00:02.000
The Web is always changing
Shaka Packager reports the error "Packaging Error: 6 (END_OF_STREAM)".
Fixes #1217
Closing the upstream on flush will effectively terminate the ongoing
curl connection. This means we would need to re-establish the
connection in order to resume writing, which is not what we want. In the
spirit of the documentation of File::Flush:
```c++
/// Flush the file so that recently written data will survive an
/// application crash (but not necessarily an OS crash). For
/// instance, in LocalFile the data is flushed into the OS but not
/// necessarily to disk.
```
We will instead wait for the curl thread to finish consuming whatever
might be in the upload cache, but leave the connection open for
subsequent writes.
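A rough sketch of the idea (illustrative names, not HttpFile's actual members):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Flush() blocks until the curl thread has consumed everything buffered for
// upload, but does NOT close the upstream, so later writes reuse the
// existing connection.
class UploadCache {
 public:
  void OnBytesBuffered(std::size_t n) {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_ += n;
  }
  void OnBytesConsumed(std::size_t n) {
    std::lock_guard<std::mutex> lock(mutex_);
    pending_ -= n;
    if (pending_ == 0) drained_.notify_all();
  }
  void WaitUntilDrained() {
    std::unique_lock<std::mutex> lock(mutex_);
    drained_.wait(lock, [this] { return pending_ == 0; });
  }

 private:
  std::mutex mutex_;
  std::condition_variable drained_;
  std::size_t pending_ = 0;
};
```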
Fixes #1196
# Low Latency DASH - `availabilityTimeComplete=false`
Low Latency DASH manifests generated by Packager were missing the
attribute `availabilityTimeComplete`. As per the [DASH
specs](https://dashif.org/docs/CR-Low-Latency-Live-r8.pdf):
**_the AdaptationSet@availabilityTimeComplete should be present and be
set to 'FALSE'_**
## The Issue
The missing attribute caused ULL streams from Shaka Packager to no
longer be compatible with DASH.js. Previous versions of DASH.js allowed
users to specify ULL mode when initializing the player. However, the
most recent releases of DASH.js automatically detect ULL by scanning the
manifest for ULL specific attributes. Although there are many attributes
only associated with ULL, [DASH.js only greps for
`availabilityTimeComplete` in its detection
logic](https://github.com/Dash-Industry-Forum/dash.js/blob/development/src/streaming/controllers/PlaybackController.js#L792-L805).
Because of the missing attribute in Packager and the limited ULL
verification criteria by DASH.js, Packager streams were not being
treated as low latency streams by DASH.js.
## Testing
### Unit Testing
`./mpd_unittest --gtest_filter="SegmentTemplateTest.OneSegmentLowLatency"`
`./mpd_unittest --gtest_filter="LowLatencySegmentTest.LowLatencySegmentTemplate"`
### Manual Testing
- Created a low latency stream with Shaka Packager
- Observed the expected `availabilityTimeComplete=false` attribute in
the generated DASH manifest.
It appears that not all Apple implementations follow the HLS guidelines.
While DEFAULT=NO for an audio track should be optional and default
to NO, in practice the native HLS players on Safari and iOS devices treat
a missing DEFAULT as a MAYBE.
Fixes #1169
This issue is observed on Python 3.10 and above.
This workaround addresses a major backwards-compatibility break in a
major Python release version, where collections.abc isn't available.
Fixes #1192
A single-line change on line 170 to `wv.protection_scheme =
struct.unpack('>L', bytes(protection_scheme, encoding='utf-8'))[0]` is
needed to work around this issue on Ubuntu 22.04 LTS+ running Python
3.10+:
```sh
TypeError: a bytes-like object is required, not 'str'
```
This brings some workflow improvements and fixes from the `cmake` branch
to `main`, as well as some unique fixes to keep gclient working, so that
we can continue to accept contributions in `main` until the `cmake`
merge is ready.
- Fix docs build in GitHub Actions (from `cmake` branch)
- Cancel workflow when a PR is updated (from `cmake` branch)
- Fix docker failures caused by running as root (from `cmake` branch)
- Work around exception in depot_tools on Windows
- Use Windows 2019 images in GitHub Actions for compatibility with gyp
- Remove Docker build on ArchLinux, which no longer supports python2 at
all
- (NOTE: The `cmake` branch is still building on ArchLinux. Docker
builds for Arch will be restored to the `main` branch when the `cmake`
branch is finally merged to `main`.)
Note:
* An xHE-AAC capable encoder will auto-adjust the user-specified SAP/RAP
value to the allowed grid where SAPs/RAPs can occur,
e.g. `-rapInterval 5000` (5s) may result in actual SAPs/RAPs every
4.984s.
* To ensure that a SAP/RAP starts a new segment, Shaka Packager needs to be
executed with a `--segment_duration` that is less than or equal to that
adjusted value.
* If every SAP/RAP should trigger a new segment, just set the segment
duration to a very low value, e.g. `--segment_duration 0.1`.
Using the latest depot_tools no longer works. depot_tools also wants
to auto-update itself, which must now be disabled.
We also need to disable the copy of python (vpython) included in
depot_tools, since for some distros, it has dependencies on system
libraries that no longer exist.
Finally, we need to force some distros to use python 2, because our
build system is ancient and needs to be ripped out and replaced some
day soon.
This fixes build issues in our CI, our Dockerfiles, and in general on
certain platforms or distros.
Closes #1023
The official, static-linked linux builds were crashing in their use of
getaddrinfo, which libcurl was configured to use. Both getaddrinfo
and all of its alternatives available in glibc fail with static
linking.
We can fix this by configuring libcurl to use libc-ares on Linux
instead. This allows us to keep the benefits of a statically-linked
Linux binary.
Closes #996
Change-Id: Ib4a9eb939813fd165727788726459ef4adf3fc4d
The script in packager/testing/dockers/test_dockers.sh now outputs
more useful info for debugging, uses unique container names per OS so
that the containers can be debugged, and allows filtering to re-run
specific OSes if a build fails.
Change-Id: I0cace282549c093a643009f5e60e7545a039168c
This updates the main Dockerfile and all the docker-based
distro-specific tests. The base OS versions have been updated to
versions that have not reached end-of-life status yet, and the list of
dependencies required has been updated and pruned.
Change-Id: Ibcff2f60e739fd5d999af100af76c40aa91a75bc
In many places, we used std::numeric_limits without including the
proper header, `<limits>`. This would build on some Linux distributions,
but not others.
This adds the missing includes, fixing the build on Fedora, among
other distros.
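For example (illustrative):

```cpp
#include <limits>  // Must be included explicitly for std::numeric_limits.

constexpr int kMaxInt = std::numeric_limits<int>::max();
```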
Change-Id: I63e9e37e5973fe23bbdf9868552db51062b1dae4
In one of the low-latency changes, a change was made to HttpFile that
caused responses to HTTP POST requests to go missing. This resulted
in failures to fetch encryption keys.
The breaking change was recommended by me in a PR review, and was not
caught by any unit tests. New tests would be ideal, but I chose to
fix the bug first, rather than leave the repo broken.
This bug was brought to my attention in google/shaka-streamer#87 and
has not appeared in any release versions.
Change-Id: I9eca73d187a8a30f16c4a920fcdb7b4872253858
## The Issue
- With LL-DASH mode enabled, the gap size warning was hit and printed to the console every time a new segment was registered to the manifest.
- This occurred because the first chunk's size and duration were being stored for each segment, rather than the full segment size and duration. Note, only the first chunk's metrics are known at first because in low latency mode, the segment is registered to the manifest before it is finished being processed and written.
- Because of this, the gap size check was comparing the end time of the first chunk in the previous segment to the beginning time of the current segment, causing the check to fail every time.
## The Fix
- Update a low latency segment's duration and size once the segment file has been fully written.
- The full segment size and duration will be used to update the bandwidth estimator and the segment info list.
- Updating the segment info list to hold the full duration is necessary to satisfy [the gap size check found in Representation.cc](https://github.com/google/shaka-packager/blob/master/packager/mpd/base/representation.cc#L391).
- NOTE: bandwidth estimation is currently only used in HLS
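A rough sketch of the fix's shape (illustrative names, not Representation's actual API):

```cpp
#include <cstdint>

// A low-latency segment is first registered with only the first chunk's
// metrics; once the segment file is fully written, the full values are
// patched in.
struct SegmentInfo {
  int64_t start_time = 0;   // timescale units
  int64_t duration = 0;     // initially the first chunk's duration
  uint64_t size_bytes = 0;  // initially the first chunk's size
};

void OnLowLatencySegmentComplete(SegmentInfo* segment,
                                 int64_t full_duration,
                                 uint64_t full_size_bytes) {
  // With the full values in place, the gap check compares real segment
  // boundaries, and the bandwidth estimator (currently only consumed by
  // HLS output) sees the whole segment rather than a single chunk.
  segment->duration = full_duration;
  segment->size_bytes = full_size_bytes;
}
```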