# LL-DASH Support
These changes add support for LL-DASH streaming.
**NOTE:** LL-HLS support is still in progress, but it's coming. :)
## Testing
`./chunking_unittest --gtest_filter="ChunkingHandlerTest.LowLatencyDash"`
`./media_event_unittest --gtest_filter="MpdNotifyMuxerListenerTest.LowLatencyDash"`
`./mpd_unittest --gtest_filter="PeriodTest.LowLatencyDashMpdGetXml"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifyAvailabilityTimeOffset"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifySegmentDuration"`
`./mpd_unittest --gtest_filter="LowLatencySegmentTest.LowLatencySegmentTemplate"`
Note: `packager_test` must be run from the main project directory.
`./out/Release/packager_test --gtest_filter="PackagerTest.LowLatencyDashEnabledAndUtcTimingNotSet"`
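Beyond the unit tests, a manual end-to-end run looks roughly like the sketch below. The flag names `--low_latency_dash_mode` and `--utc_timings`, and all descriptor values, are assumptions for illustration rather than verified syntax.

```sh
# Sketch: package a stream in low-latency DASH mode with a UTCTiming source.
packager \
  'in=input.mp4,stream=video,init_segment=h264_init.mp4,segment_template=h264_$Number$.m4s' \
  --low_latency_dash_mode=true \
  --utc_timings 'urn:mpeg:dash:utc:http-xsdate:2014=https://time.akamai.com/?iso' \
  --mpd_output h264.mpd
```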
This also allows setting the language of different text streams from
the same input. Multiple streams can use the same input stream
using different cc_index values and can each use a different language.
This will also try to pull the language from the input if it is not
specified.
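For example (a sketch; the input, index, and language values are illustrative), two text streams could share one input with different cc_index and language values:

```sh
# Two caption tracks from the same input, each with its own language.
packager \
  'in=input.ts,stream=text,cc_index=0,language=en,output=cc_en.mp4' \
  'in=input.ts,stream=text,cc_index=1,language=es,output=cc_es.mp4' \
  --mpd_output output.mpd
```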
Change-Id: I7078710b509b7d77dad8cb4299a82f954af7e9e7
That is, with this change the flags --generate_sidx_in_media_segments and
--nogenerate_sidx_in_media_segments work in both single-segment
and multi-segment modes.
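A minimal sketch of toggling this behavior (descriptor values are illustrative):

```sh
# Emit 'sidx' boxes in media segments; use --nogenerate_sidx_in_media_segments to disable.
packager \
  'in=input.mp4,stream=video,init_segment=init.mp4,segment_template=video_$Number$.m4s' \
  --generate_sidx_in_media_segments \
  --mpd_output output.mpd
```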
Related to #862.
Change-Id: Icd27fd00e8e036ba0c4709b48650372429cc0351
This changes the default MP4 output to use TTML and adds a way to
choose which one is used. This is done with 'format=ttml+mp4' or
'format=vtt+mp4'.
This also fixes the boxes written for WebVTT in MP4.
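A sketch of selecting the text format explicitly, assuming the `format=` field is given in the stream descriptor as quoted above:

```sh
# Package a WebVTT input as TTML in MP4; format=vtt+mp4 keeps WebVTT boxes instead.
packager \
  'in=input.vtt,stream=text,format=ttml+mp4,output=text.mp4' \
  --mpd_output output.mpd
```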
Change-Id: Ieaa7fc44fbf4dc020a5bb70cfa3578ec10e088ce
This only supports TTML output, meaning the user can convert WebVTT
into TTML, but not the other way around. This will be useful for
DVB-sub subtitles, which are better supported within TTML.
This only adds text-based output; a follow-up will add MP4 support.
Change-Id: I0944b7df95d7765e55f203fc5e9a644f5c455dd8
We currently have a bug that causes non-deterministic output in the MPD
generator. This works around that bug by optionally doing everything
in a single thread. This allows us to run manifest comparisons without
making the major changes needed to add that feature.
Issue #177
Change-Id: I10e1084dac77841220161fbd2575cdcb5c13c00e
Now text-based WebVTT also uses the generic media pipeline. This
converts the WebVttTextOutputHandler to a WebVttMuxer to be more
consistent with the other muxer types.
This also allows choosing between single-segment and multi-segment text.
Before, we would generate both and use single-segment for DASH and
multi-segment for HLS; now you can choose either one, and both are
supported in DASH and HLS.
Change-Id: I6f7edda09e01b5f40e819290d3fe6e88677018d9
Now the same pipeline that handles the audio/video streams will handle
the segmented text streams too. This doesn't apply to the text output,
only to the MP4 variants. This also fixes a bug where we added the
X-TIMESTAMP-MAP tag even when there were no TS streams; the behavior
around that tag is otherwise unchanged.
Change-Id: I03f7cea56efa42e96311c00841330629a14aa053
This changes it from an OriginHandler to a MediaParser and moves the
handling of it to the Demuxer. This will allow more generic handling
of text by giving it the same abstractions as video/audio handling.
Change-Id: Ibbde3c84d228ec8e83af1ed266ea97dbc9589c24
Opening a named pipe can block until both ends are open, and we cannot
control when the other end will be open. Ideally, we would always
open files in a thread so that Packager can be used with piped inputs
from naive applications without a potential deadlock.
This change will defer opening WebVTT files until the parser Run()
method is called from a thread. This way, WebVTT files being sent in
from a pipe will never be able to block the main thread.
Previously, files were opened on the main thread before calling the
parser constructor, passing the open file to the constructor as an
argument. I also tried doing it in the parser's InitializeInternal()
method, but that is also called from the main thread.
Change-Id: I54cc68ed9d48a8dc697829119be84d4065b1ae1c
Add the dash_accessibilities stream descriptor, an optional
semicolon-separated list of accessibility_scheme_id_uri=value pairs.
Add the dash_roles stream descriptor, an optional semicolon-separated
list of strings.
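A sketch with both descriptors on one stream (the scheme URI and role value are only illustrative, not prescribed by this change):

```sh
# Signal an accessibility scheme and a DASH role on an audio stream.
packager \
  'in=input.mp4,stream=audio,output=audio.mp4,dash_accessibilities=urn:tva:metadata:cs:AudioPurposeCS:2007=1,dash_roles=alternate' \
  --mpd_output output.mpd
```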
Closes #565.
Change-Id: Idb1c20bb410fdd016db07e11fe507c102a3dd8ea
We have logic in the bandwidth calculation to ignore segments that are
smaller than half of the target duration. The logic does not have any
effect right now, as the target duration in the mpd/hls params is always
zero.
This change sets the target duration in the mpd/hls params, so it fixes
the part of issue #581 where the last segment is less than half of the
target duration.
Issue #581.
Fixes #498.
Change-Id: Ieb2dbf4da9fc72a7b9de802cda4294f1954d29b4
It allows users to override the default language for text tracks.
If not specified, --default_language applies to both audio and text
tracks.
Issue #430.
Change-Id: I86a9baba2072be27b6661fa7b65a8bc8b6adb3cc
Add the hls_characteristics stream descriptor, an optional colon- or
semicolon-separated list of strings.
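A sketch of tagging a text stream (the characteristic values are common Apple-defined ones, used here only as an illustration):

```sh
# Attach CHARACTERISTICS to a subtitle stream in the HLS master playlist.
packager \
  'in=input.vtt,stream=text,output=text.mp4,hls_characteristics=public.accessibility.transcribes-spoken-dialog;public.accessibility.describes-music-and-sound' \
  --hls_master_playlist_output master.m3u8
```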
Fixes #430.
Change-Id: Ifcf79316e68768ff065891933de565cd0ff32ec4
https://tools.ietf.org/html/rfc8216#section-4.1
The peak segment bit rate of a Media Playlist is the largest bit rate
of any contiguous set of segments whose total duration is between 0.5
and 1.5 times the target duration.
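Expressed as a formula (notation mine, not from the RFC): for segment sizes b_i in bytes, durations d_i in seconds, and target duration T,

$$\mathrm{PeakBitrate} = \max_{\substack{W\ \text{contiguous} \\ 0.5\,T \le \sum_{i \in W} d_i \le 1.5\,T}} \frac{8 \sum_{i \in W} b_i}{\sum_{i \in W} d_i}$$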
Fixes #498.
Change-Id: I1f28972b9cc5977735e47906bdcd88ba3942db5a
Note that TTML in ISO-BMFF is not supported yet.
Also updated packager_test.py:
- Added a test using TTML passthrough.
- Computed output extension from input extension unless output_format
is specified.
Fixes #478.
Change-Id: Ia917fc4ed3c326782791ed67601fba02ea28b11d
This also ensures that it does not violate std::sort()'s strict-ordering
requirement, which is enforced in gcc/g++ though not in clang.
std::sort() strict ordering requirement:
std::sort() needs a comparator that returns true iff the first argument
is strictly less than the second one; that is, it must return false when
they are equal.
Change-Id: I781cf4ed4125fcad212eba5430a264f3a3d71c16
Configurable with --transport_stream_offset_ms.
This is needed to compensate for possible negative timestamps in
inputs, which could happen on ISO-BMFF with EditLists.
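A sketch of applying the offset (the 100 ms value is illustrative):

```sh
# Shift TS timestamps forward by 100 ms to absorb small negative input timestamps.
packager \
  'in=input.mp4,stream=video,segment_template=video_$Number$.ts' \
  --transport_stream_offset_ms 100 \
  --hls_master_playlist_output master.m3u8
```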
Issue #112.
Change-Id: I0fce8766c9df2911b9bb859c1e54052a8ed2abfb
Previously, the text padder media handler would assume that text always
started at time zero. This would work for VOD but would result in a
large pad at the start of LIVE content.
To avoid this, the text padder will use a bias to test whether or not it
thinks the content starts at zero. Right now the bias is set to be 10
minutes, but will later be configurable with a command line flag.
10 minutes was chosen because LIVE content will have much larger start
times and VOD content should have much lower ones.
Issue: #416
Change-Id: I07af15a577392fb030e36f052085cd4e667700e8
When we built the cue alignment media handler, we did not remove
the old solution.
This change removes all the code for the old solution.
Change-Id: I851b284c449c7d25aaabc2f55df5579ba7b5aad1
Moved ChainHandlers into MediaHandler::Chain so that it can be used in
our test code as well. After all, it is a pretty helpful function.
Change-Id: I8d83ee184052cd9fa9b37f2741c96f3223d5ab48
Before, the text chunker would assume that all text streams used
millisecond time units, which is not a safe assumption to make.
This changes it to take the time scale from StreamInfo and work
natively with scaled time units.
This required updating the tests. While doing so, the tests were
rewritten to make them easier to read.
Closes: #399
Change-Id: Ib792ad306f40d749763418cde645337913a6046b
- Also fixed documentation that was missing --hls_playlist_type LIVE for
HLS live examples.
- Also updated packager.cc to use RETURN_IF_ERROR macro for
consistency.
Fixes #347. Fixes #403.
Change-Id: Idbccd7137b873170cd54e2c780bd554d25031a0b
The WebVTT MP4-to-MP4 path did not include all the features we needed,
and we know of no one who is asking for it, so we opted to remove the
path. Now there will be an error, rather than incorrect output, if
someone tries to use it.
Issue #405
Change-Id: Id2c37bb385c514dd8e31f7d3bd75fb3904b70d78
The previous text-to-MP4 WebVTT pipeline was incomplete. It
did not insert ad cues, and it could only insert a segment
after a sample ended.
Now the pipeline supports ad cue insertion and segment insertion
mid text sample. This required the pipeline to use the text
chunker (to split samples and insert segments) and required
a major overhaul of the text-to-MP4 converter.
Before, the converter came before the chunker. This meant that
the converter only expected to see stream info and text samples.
Moving the converter after the cue aligner and chunker means
that the converter had to be aware of segments and cues.
The general approach is the same; however, the converter now
converts the samples per segment, as the chunker will introduce
duplicate samples if a sample spans across segments.
Closes #362. Closes #382.
Change-Id: I0f54a40524c36a602ad3804a0da26e80851c92fd
Allow the output file to contain template identifiers like $Number$,
$Time$ etc. Unlike the actual template used in SegmentTemplate, the
identifiers in the output will be populated before being pushed to the
DASH manifest.
Issue: #384
Change-Id: Ife1caadb6fccd32167fa1bc83fe2afcb2d2ad087
Renamed all the files called "webvtt_output_handler*" to
"webvtt_text_output_handler*" to better reflect the class name
in them.
Change-Id: I977bab362076974a124f263bcefff716ed8b6a0f
Before, the webvtt output handler was written so that it could
share code between a segmented and non-segmented handler. As
we are not worried about that right now, this change simplifies
the handler to just be about segmented output.
Change-Id: I29dbc4e3a4ffbeb7ea10e23db489ee74b398a6c4
Require the output format determined from 'output' to be consistent
with the output format determined from 'segment_template'.
Change-Id: I32cbd63fcd6e2a4272dd0db531c1d5b385315445
To move us toward no longer needing to ensure order when building our
pipeline, use a map to share demuxers between stream descriptors.
This will even allow us to use the same demuxers in the text pipelines
while still building them separately from the audio and video streams.
Change-Id: I4d4dbddbc06adee36cbe7f4aa1f6769f7bb2a3f6
It is not always possible to align the segment duration to the target
duration exactly. For example, for AAC with a sampling rate of 44100 Hz,
each frame always contains 1024 audio samples, so the frame duration is
1024/44100 seconds. For a target duration of 2 seconds, the closest
achievable segment durations are about 1.997 or 2.020 seconds.
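Concretely, the closest whole numbers of AAC frames around the 2-second target give:

$$86 \times \frac{1024}{44100} \approx 1.997\ \mathrm{s}, \qquad 87 \times \frac{1024}{44100} \approx 2.020\ \mathrm{s}$$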
This feature allows the MPD generator to treat these segments as having
the same duration, which lets it generate fewer SegmentTimeline entries
and potentially no SegmentTimeline entries at all (replaced with
SegmentTemplate@duration instead if the
--segment_template_constant_duration flag is enabled).
This is gated behind the --allow_approximate_segment_timeline flag and
is disabled by default.
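A sketch of enabling both flags together (descriptor values are illustrative):

```sh
# Collapse near-equal segment durations; with a constant duration, SegmentTimeline
# is replaced by SegmentTemplate@duration.
packager \
  'in=audio.mp4,stream=audio,init_segment=audio_init.mp4,segment_template=audio_$Number$.m4s' \
  --allow_approximate_segment_timeline \
  --segment_template_constant_duration \
  --mpd_output output.mpd
```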
Fixes #330.
Change-Id: I5044eaa348ebbf45bf792a2af53fc95a115ae21b
A two-character ISO-639 code in --default_language was ignored due to
a bug in language code matching, since the language code in the stream
is always converted to a three-character code.
Fixes #371.
Change-Id: I8618938af583a417446636ff9efe1c72ce822c33
Created a "Chain" function to connect a series of handlers together so
that the connection is easier to read.
Changed other call sites to use RETURN_IF_ERROR to make them easier to
read, since we no longer need to track the status variable.
Change-Id: Iffb76ca395b6d62f8feb054c09470c77718e8feb
Created a media handler to come after parsers that will handle filling
in gaps between text samples. The padder takes a min duration, and if
the samples do not cover the min duration when flushed, one last empty
sample will be injected so that the samples will go up to the min duration.
Change-Id: I88605059664d09279676edac418ff3d4990d7556