This work was done over ~80 individual commits in the `cmake` branch,
which are now being merged back into `main`. As a roll-up commit, it is
too big to be reviewable, but each change was reviewed individually in
context of the `cmake` branch. After this, the `cmake` branch will be
renamed `cmake-porting-history` and preserved.
---------
Co-authored-by: Geoff Jukes <geoffjukes@users.noreply.github.com>
Co-authored-by: Bartek Zdanowski <bartek.zdanowski@gmail.com>
Co-authored-by: Carlos Bentzen <cadubentzen@gmail.com>
Co-authored-by: Dennis E. Mungai <2356871+Brainiarc7@users.noreply.github.com>
Co-authored-by: Cosmin Stejerean <cstejerean@gmail.com>
Co-authored-by: Carlos Bentzen <carlos.bentzen@bitmovin.com>
Co-authored-by: Cosmin Stejerean <cstejerean@meta.com>
Co-authored-by: Cosmin Stejerean <cosmin@offbytwo.com>
# LL-DASH Support
These changes add support for LL-DASH streaming.
**NOTE:** LL-HLS support is still in progress, but it's coming. :)
## Testing
`./chunking_unittest --gtest_filter="ChunkingHandlerTest.LowLatencyDash"`
`./media_event_unittest --gtest_filter="MpdNotifyMuxerListenerTest.LowLatencyDash"`
`./mpd_unittest --gtest_filter="PeriodTest.LowLatencyDashMpdGetXml"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifyAvailabilityTimeOffset"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifySegmentDuration"`
`./mpd_unittest --gtest_filter="LowLatencySegmentTest.LowLatencySegmentTemplate"`
Note: `packager_test` must be run from the main project directory.
`./out/Release/packager_test --gtest_filter="PackagerTest.LowLatencyDashEnabledAndUtcTimingNotSet"`
Now the same pipeline that handles the audio/video streams will handle
the segmented text streams too. This doesn't apply to the text output,
only to the MP4 variants. This also fixes a bug where we added the
X-TIMESTAMP-MAP tag even when there were no TS streams; it doesn't
otherwise change the behavior around that tag.
Change-Id: I03f7cea56efa42e96311c00841330629a14aa053
In the media handler code there is an example of what a one-to-many media
handler looks like. It used the trick play handler as an example, but that
handler is now a one-to-one media handler. Changed the example to use the
replicator media handler.
Change-Id: I26908c6e27ea4a697a19c0fa0179c60842a449d2
Moved ChainHandlers to MediaHandler::Chain so that it can be used in our
test code as well; after all, it is a pretty helpful function.
Change-Id: I8d83ee184052cd9fa9b37f2741c96f3223d5ab48
Ran clang-format over media_handler.cc and media_handler.h so that
later changes won't update the formatting as much.
Change-Id: I2db4f9f4e8a66fca1e1418ab99b283a0e4a70e4c
The previous text to mp4 WebVTT pipeline was incomplete. It
did not insert ad cues, and it could only insert a segment
after a sample ended.
Now the pipeline supports ad cue insertion and segment insertion
mid text sample. This required the pipeline to use the text
chunker (to split samples and insert segments) and required
a major overhaul of the text to mp4 converter.
Previously, the converter came before the chunker. This meant that
the converter only expected to see stream info and text samples.
Moving the converter after the cue aligner and chunker means
that the converter has to be aware of segments and cues.
The general approach is the same; however, the converter now
converts the samples per segment, since the chunker will introduce
duplicate samples if a sample spans across segments.
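To make the duplicate-sample behavior concrete, here is a minimal, self-contained sketch (the names are illustrative, not the real TextSample or chunker classes) of how a cue that spans a segment boundary ends up emitted once per segment it overlaps, which is why the converter now works per segment:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative stand-in for a text cue; not the real TextSample class.
struct TextCue {
  int64_t start_ms;
  int64_t end_ms;
};

// Returns one copy of |cue| per segment it overlaps, clipped to the segment
// boundaries. A cue spanning a boundary therefore shows up in more than one
// segment, so the converter has to convert samples segment by segment.
std::vector<TextCue> SplitAcrossSegments(const TextCue& cue,
                                         int64_t segment_duration_ms) {
  std::vector<TextCue> pieces;
  int64_t segment_start =
      (cue.start_ms / segment_duration_ms) * segment_duration_ms;
  while (segment_start < cue.end_ms) {
    const int64_t segment_end = segment_start + segment_duration_ms;
    pieces.push_back({std::max(cue.start_ms, segment_start),
                      std::min(cue.end_ms, segment_end)});
    segment_start = segment_end;
  }
  return pieces;
}
```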
Closes #362
Closes #382
Change-Id: I0f54a40524c36a602ad3804a0da26e80851c92fd
In prep for changes to Trick Play, we want to make all messages
copy-on-write so that if the same message is sent to multiple
handlers, it is not possible for one handler to change the data
another handler is using.
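A minimal sketch of the copy-on-write idea, using an illustrative message type rather than the actual handler interfaces: every handler receives the same immutable message, and a handler that wants different data copies it first.

```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

// Illustrative message type; not the real stream data classes.
struct Message {
  int64_t timestamp = 0;
};

// Every downstream handler receives the same immutable message, so no
// handler can mutate data that another handler is still reading.
void Dispatch(
    const std::shared_ptr<const Message>& message,
    const std::vector<std::function<void(std::shared_ptr<const Message>)>>&
        handlers) {
  for (const auto& handler : handlers)
    handler(message);
}

// A handler that needs different data makes its own copy (the "write" in
// copy-on-write) and forwards the copy, leaving the shared original intact.
std::shared_ptr<const Message> ShiftTimestamp(
    const std::shared_ptr<const Message>& message, int64_t offset) {
  auto copy = std::make_shared<Message>(*message);
  copy->timestamp += offset;
  return copy;
}
```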
Change-Id: I554166ca11c532412e4dfced5603972ca24dc2bb
Added a text sample type that can be passed between media handlers.
This will allow text samples to safely be moved between media
handlers before converting to their final format.
Bug: 36138902
Change-Id: Ic4946f774a7d37c43066b9ea46596d5c5f3c05a8
This CL also removes the EncryptionConfig stream data type and merges it
into StreamInfo/SegmentInfo instead.
Change-Id: Idb70ce503e61d3c951225cc78b6b15c084e16dcd
- Also sets up the packaging and verifies it works.
Some of the changes are temporary to get the integration going.
Change-Id: I0cf6c379d185e157808acabb9ef58ff93d4a39ae
This handler is a multi-in, multi-out handler. If more than one input is
provided, there should be one and only one video stream; also, all inputs
should come from the same thread and be synchronized.
There can be multiple chunking handlers running in different threads or even
different processes. We use the "consistent chunking algorithm" to make sure
the chunks in different streams are aligned without the handlers explicitly
communicating with each other, which would be inefficient and often difficult.
Consistent Chunking Algorithm:
1. Find the consistent chunkable boundary
Let the timestamps for video frames be (t1, t2, t3, ...). Then a
consistent chunkable boundary is simply the first chunkable boundary after
(tk / N) != (tk-1 / N), where '/' denotes integer division, and N is the
intended chunk duration.
2. Chunk only at the consistent chunkable boundary
This algorithm makes sure the chunks from different video streams are
aligned if they have aligned GoPs. However, it only works
for video streams. To be able to chunk non-video streams at similar
positions as video streams, ChunkingHandler is designed to accept one video
input and multiple non-video inputs; the non-video inputs are chunked when
the video input is chunked. If the inputs are synchronized (which is true
if the inputs come from the same demuxer), the video and non-video chunks
are aligned.
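As a concrete illustration of step 1, here is a minimal sketch of the timestamp test, with illustrative names rather than the actual ChunkingHandler code; the real handler then chunks at the first chunkable boundary (e.g. a keyframe) at or after the point where this condition becomes true.

```cpp
#include <cstdint>

// All values are in the same timescale; |chunk_duration| is the intended
// chunk duration N. Because '/' is integer division, the test is true
// exactly when |current_ts| is the first timestamp to reach a new multiple
// of N, so every stream seeing the same timestamps picks the same boundary.
bool IsConsistentChunkBoundary(int64_t previous_ts,
                               int64_t current_ts,
                               int64_t chunk_duration) {
  return (current_ts / chunk_duration) != (previous_ts / chunk_duration);
}
```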
Change-Id: Id3bad51ab14f311efdb8713b6cd36d36cf9e4639