As per the AV1 spec, the codec string may contain optional color values.
This extracts the missing color information from the mp4 `colr` atom, if
present, and generates the full AV1 codec string.
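A minimal sketch of how the full string can be assembled once the `colr` (nclx) values are known. The helper name and base-string handling are illustrative, not the packager's actual code; the field order follows the AV1 ISOBMFF codec string binding.

```cpp
#include <cstdio>
#include <string>

// Appends the optional color fields of an AV1 codec string:
//   <monochrome>.<chromaSubsampling>.<colorPrimaries>.
//   <transferCharacteristics>.<matrixCoefficients>.<videoFullRangeFlag>
// using values taken from the mp4 'colr' (nclx) box.
std::string FullAv1CodecString(const std::string& base,  // e.g. "av01.0.04M.10"
                               int color_primaries,
                               int transfer_characteristics,
                               int matrix_coefficients,
                               bool video_full_range) {
  char buffer[64];
  // Monochrome and chroma subsampling come from the AV1 config itself;
  // "0.110" (4:2:0, unspecified sample position) is only a placeholder here.
  std::snprintf(buffer, sizeof(buffer), ".0.110.%02d.%02d.%02d.%d",
                color_primaries, transfer_characteristics,
                matrix_coefficients, video_full_range ? 1 : 0);
  return base + buffer;
}

// FullAv1CodecString("av01.0.04M.10", 9, 16, 9, false)
//   -> "av01.0.04M.10.0.110.09.16.09.0"
```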
Closes #1007
In many places, we used std::numeric_limits without including the
proper header. This would build on some Linux distributions, but not
others.
This adds the missing includes, fixing the build on Fedora, among
other distros.
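For illustration, the failure mode looks like this: std::numeric_limits is declared in <limits>, which some standard library implementations pull in transitively through other headers and others (e.g. Fedora's) do not.

```cpp
#include <cstdint>
#include <limits>  // the missing include; without it, this may or may not build

constexpr int64_t kInvalidTimestamp = std::numeric_limits<int64_t>::max();
```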
Change-Id: I63e9e37e5973fe23bbdf9868552db51062b1dae4
## The issue
- With LL-DASH mode enabled, the gap size warning was hit and printed to the console every time a new segment was registered to the manifest.
- This occurred because the first chunk's size and duration were being stored for each segment, rather than the full segment's size and duration. Only the first chunk's metrics are available at that point: in low latency mode, the segment is registered to the manifest before it has finished being processed and written.
- Because of this, the gap size check was comparing the end time of the first chunk in the previous segment to the beginning time of the current segment, causing the check to fail every time.
## The Fix
- Update a low latency segment's duration and size once the segment file has been fully written (see the sketch after this list).
- The full segment size and duration will be used to update the bandwidth estimator and the segment info list.
- Updating the segment info list to hold the full duration is necessary to satisfy [the gap size check found in Representation.cc](https://github.com/google/shaka-packager/blob/master/packager/mpd/base/representation.cc#L391).
- NOTE: bandwidth estimation is currently only used in HLS
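A minimal sketch of the idea behind the fix, using hypothetical names (SegmentInfo, OnSegmentFileClosed) rather than the packager's actual types:

```cpp
#include <cstdint>

// Provisional values come from the first chunk, because in low latency mode
// the segment is registered to the manifest before it is fully written.
struct SegmentInfo {
  int64_t start_time = 0;
  int64_t duration = 0;     // initially the first chunk's duration
  uint64_t size_bytes = 0;  // initially the first chunk's size
};

void OnSegmentFileClosed(SegmentInfo* info,
                         int64_t full_duration,
                         uint64_t full_size_bytes) {
  // Replace the first-chunk metrics with the full segment's metrics so the
  // gap-size check compares real segment boundaries and the bandwidth
  // estimator sees the whole segment.
  info->duration = full_duration;
  info->size_bytes = full_size_bytes;
}
```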
# LL-DASH Support
These changes add support for LL-DASH streaming.
**NOTE:** LL-HLS support is still in progress, but it's coming. :)
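For reference, a minimal invocation that enables the mode might look like the following. The flag names (`--low_latency_dash_mode`, `--utc_timings`) reflect this change as I understand it; adjust paths and the UTC timing source to your setup.

```sh
packager \
  'in=video.mp4,stream=video,init_segment=video_init.mp4,segment_template=video_$Number$.m4s' \
  --low_latency_dash_mode \
  --utc_timings "urn:mpeg:dash:utc:http-xsdate:2014=https://time.akamai.com/?iso" \
  --mpd_output ll_dash.mpd
```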
## Testing
`./chunking_unittest --gtest_filter="ChunkingHandlerTest.LowLatencyDash"`
`./media_event_unittest --gtest_filter="MpdNotifyMuxerListenerTest.LowLatencyDash"`
`./mpd_unittest --gtest_filter="PeriodTest.LowLatencyDashMpdGetXml"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifyAvailabilityTimeOffset"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifySegmentDuration"`
`./mpd_unittest --gtest_filter="LowLatencySegmentTest.LowLatencySegmentTemplate"`
Note, packager_test must be run from the main project directory
`./out/Release/packager_test --gtest_filter="PackagerTest.LowLatencyDashEnabledAndUtcTimingNotSet"`
This converts all time parameters to signed, finishing a cleanup that
was started in 2018 in b4256bf0. This changes the type of:
- timestamps
  - PTS specifically
- timestamp offsets
- timescales
- durations
This excludes:
- MP4 box definitions
  - DTS specifically
This is meant to address signed/unsigned conversion issues on arm64
that caused some test cases to fail.
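As a generic illustration of the class of bug (not packager code): with unsigned types, subtracting a larger offset wraps around instead of producing a negative value, and mixed signed/unsigned comparisons promote in surprising ways.

```cpp
#include <cstdint>

int64_t ShiftTimestamp(int64_t timestamp, int64_t offset) {
  // With uint64_t parameters, offset > timestamp would wrap to a huge
  // positive value; with int64_t the result is simply negative and can be
  // detected and handled explicitly.
  return timestamp - offset;
}
```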
Change-Id: Ic752a20cbc6e31fea6bc0894d1771833171e7cbe
It is not working correctly in gcc 4.8 or earlier, which is still
popular (bundled by default in CentOS 7).
Issue #865, #929.
Change-Id: I136446a70831bd0237cd29646dd349fe7558176b
Legacy players, e.g. older versions of ExoPlayer, do not handle default WebVTT text alignment correctly. We need to specify `align:center` explicitly for cues without text alignment, for backwards compatibility.
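For example, a cue that previously relied on the default alignment now carries the setting explicitly:

```
00:00:01.000 --> 00:00:03.000 align:center
Subtitle text with no other cue settings
```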
Fixes #925.
It is not working correctly in gcc 4.8 or earlier, which is still
popular (e.g. bundled by default in CentOS 7).
Fixes #865, #929.
Change-Id: I55a42428dbd2a12fc2c3b1e6a49fdd662a295dca
This also allows setting the language of different text streams from
the same input. Multiple streams can use the same input stream with
different cc_index values, and each can use a different language.
This will also try to pull the language from the input if it is not
specified.
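A hypothetical invocation selecting two caption channels from the same input with different languages (the descriptor layout is illustrative; only the cc_index and language attributes are the ones discussed here):

```sh
packager \
  'in=input.ts,stream=text,cc_index=0,language=en,output=cc_en.vtt' \
  'in=input.ts,stream=text,cc_index=1,language=es,output=cc_es.vtt'
```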
Change-Id: I7078710b509b7d77dad8cb4299a82f954af7e9e7
Note that this only supports a single page within the DVB-sub stream.
Multiple pages will be merged together. A follow-up will allow
selecting a specific page.
This only supports outputting TTML or MP4+TTML; you cannot produce
DVB-sub output, nor can you output it as WebVTT. Since DVB-sub uses
images, doing this with WebVTT would be difficult, if not impossible.
This also only supports interlaced images, not progressive images or
text.
Closes #832
Change-Id: Id6dbb6393c7b9a05722e61c6bd255bef5e69a7d8
Issue #149
Co-authored-by: Andreas Motl <andreas.motl@elmyra.de>
Co-authored-by: Rintaro Kuroiwa <rkuroiwa@google.com>
Co-authored-by: Ole Andre Birkedal <o.birkedal@sportradar.com>
I.e. with this change, the flags --generate_sidx_in_media_segments and
--nogenerate_sidx_in_media_segments work for both single-segment and
multi-segment modes.
Related to #862.
Change-Id: Icd27fd00e8e036ba0c4709b48650372429cc0351
The reference count in the 'sidx' box is a uint16 field, which allows
at most 0xFFFF entries, i.e. at most 0xFFFF subsegments, which is
roughly 18 hours with one-second segments.
Do not fail packaging when this happens. Instead, generate a warning
and truncate the number of references to 0xFFFF.
Note that the actual number of mp4 fragments in the mp4 file can still
be more than 0xFFFF. The stream will not play to the end in DASH, but
it will play successfully in HLS.
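A sketch of the truncation behavior described above, with hypothetical types and a commented-out log call standing in for whatever logging the project uses:

```cpp
#include <cstddef>
#include <vector>

// The 'sidx' reference_count field is 16 bits, so clamp and warn rather than
// failing the whole packaging job.
constexpr size_t kMaxSidxReferences = 0xFFFF;

template <typename Reference>
void ClampSidxReferences(std::vector<Reference>* references) {
  if (references->size() > kMaxSidxReferences) {
    // LOG(WARNING) << "Too many sidx references; truncating to 0xFFFF.";
    references->resize(kMaxSidxReferences);
  }
}
```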
Works around #862.
Change-Id: Ib3930418d1528df1f9ea64cda0d0ebaa78d26abb
This also changes the callbacks a bit to (a) avoid passing references
to already ref-counted types, and (b) stop passing the PID, since the
parent knows it and gives it to the child parser.
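A generic illustration of point (a), not the actual callback signatures:

```cpp
#include <functional>
#include <memory>

struct EsFrame {};  // placeholder for a parsed elementary-stream unit

// Before: a const reference to a ref-counted type adds nothing and forces the
// receiver to copy if it wants to keep the object alive.
using OldEmitCb = std::function<void(const std::shared_ptr<EsFrame>&)>;

// After: pass the shared_ptr by value; callers can std::move it in and
// ownership transfer is explicit.
using NewEmitCb = std::function<void(std::shared_ptr<EsFrame>)>;
```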
Issue #832
Change-Id: I7dd44436c8d1ad81d42a813d16f850175b85ad1a
This changes the default MP4 output to use TTML and adds a way to
choose which one is used. This is done with 'format=ttml+mp4' or
'format=vtt+mp4'.
This also fixes the boxes output in WebVTT in MP4.
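For example, using the 'format' attribute in a stream descriptor (hedged; adjust inputs and outputs to your content):

```sh
# TTML in MP4 (now the default for MP4 text output):
packager 'in=subs.vtt,stream=text,format=ttml+mp4,output=subs_ttml.mp4'

# Keep WebVTT in MP4 instead:
packager 'in=subs.vtt,stream=text,format=vtt+mp4,output=subs_vtt.mp4'
```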
Change-Id: Ieaa7fc44fbf4dc020a5bb70cfa3578ec10e088ce
This only supports TTML output, meaning the user can convert WebVTT
into TTML but not the other way around. This will be useful for
DVB-sub subtitles, which are better supported in TTML.
This only adds text-based output; a follow-up will add MP4 support.
Change-Id: I0944b7df95d7765e55f203fc5e9a644f5c455dd8
This adds a new path when parsing MPEG2-TS streams to ignore unsupported
streams. This allows extracting supported streams when some of the
streams are unsupported. For example, you can extract audio from a
file that has unsupported video.
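For example, pulling only the audio out of a transport stream whose video codec is unsupported:

```sh
packager 'in=input.ts,stream=audio,output=audio.mp4'
```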
Change-Id: I608fcb19d0a573bfd35e9272f60b0b69346ae11a
This adds more generic settings for regions and CSS styles. These are
global settings, so they go on the StreamInfo object.
Change-Id: Ibb76c060206152ccf8e9a067c09877226f67c927
Now text cues are composed of nested fragments that can be individually
styled. This allows portions of the cue to be bold, etc. The WebVTT
parser doesn't parse the inline styling tags in the input, but the
original tags are preserved in the WebVTT output. The WebVTT output
will add tags if the style elements are present in the cue object.
Change-Id: I6abba4175e376e4f753193f7d8cac63e958d3c89
Now the cue settings are a generic object that is parsed in the WebVTT
parser. This will allow different parsers to set them without having
to use WebVTT-specific syntax.
Change-Id: I36689bec725bd2e515af962b7174fc5977f96fa2
This sets the groundwork for more generic text cues by having a more
generic object for the settings and the body. This also changes the
TextSample to be immutable and accepts the fields in the constructor
instead of using setters.
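A sketch of the constructor-based, immutable pattern described above; the field names are illustrative rather than the exact TextSample interface:

```cpp
#include <cstdint>
#include <string>
#include <utility>

struct TextSettings {};                     // generic cue settings
struct TextFragment { std::string body; };  // generic cue body

class TextSample {
 public:
  TextSample(std::string id, int64_t start_time, int64_t end_time,
             TextSettings settings, TextFragment body)
      : id_(std::move(id)),
        start_time_(start_time),
        end_time_(end_time),
        settings_(std::move(settings)),
        body_(std::move(body)) {}

  // Only const accessors; no setters, so a sample cannot change after
  // construction.
  const std::string& id() const { return id_; }
  int64_t start_time() const { return start_time_; }
  int64_t end_time() const { return end_time_; }
  const TextSettings& settings() const { return settings_; }
  const TextFragment& body() const { return body_; }

 private:
  const std::string id_;
  const int64_t start_time_;
  const int64_t end_time_;
  const TextSettings settings_;
  const TextFragment body_;
};
```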
Change-Id: I76b09ce8e8471a49e6bf447e8c187f867728a4bf
Now text-based WebVTT also uses the generic media pipeline. This
converts the WebVttTextOutputHandler to a WebVttMuxer to be more
consistent with the other muxer types.
This also allows choosing between single-segment and multi-segment
text. Before, we would generate both and use single-segment for DASH
and multi-segment for HLS; now you can choose either, and both are
supported in DASH and HLS.
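For example (hedged; the descriptor layout may differ slightly, but both forms now work for DASH and HLS):

```sh
# Single-segment text:
packager 'in=subs.vtt,stream=text,output=subs_all.vtt' --mpd_output example.mpd

# Multi-segment text:
packager 'in=subs.vtt,stream=text,segment_template=subs_$Number$.vtt' --mpd_output example.mpd
```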
Change-Id: I6f7edda09e01b5f40e819290d3fe6e88677018d9
This changes it from an OriginHandler to a MediaParser and moves the
handling of it to the Demuxer. This will allow more generic handling
of text by giving it the same abstractions as video/audio handling.
Change-Id: Ibbde3c84d228ec8e83af1ed266ea97dbc9589c24
In addition to the MediaSample handling of the MediaParser, this now
adds callbacks for TextSample. This allows reading text streams from
the media files.
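A hedged sketch of the shape of the new callback, using std::function rather than the project's actual typedefs; the exact parameters are an assumption:

```cpp
#include <cstdint>
#include <functional>
#include <memory>

class MediaSample;  // existing media sample type
class TextSample;   // text cue type emitted by parsers

// Alongside the existing media-sample callback, parsers can now emit text
// samples as they are parsed.
using NewMediaSampleCB =
    std::function<bool(uint32_t track_id, std::shared_ptr<MediaSample> sample)>;
using NewTextSampleCB =
    std::function<bool(uint32_t track_id, std::shared_ptr<TextSample> sample)>;
```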
Change-Id: I6c00e286e98bc9aafe05b99cf2f7ce6f89d167a9