This removes all Chromium dependencies from media/base/ and completes
the CMake build system for it.
The ClosureThread class and its associated classes were removed, as they
were specific to Chromium base. ClosureThread has been replaced by
std::thread.
The byte-swapping utilities in network_util.cc have been removed and
replaced with absl equivalents.
generate_unique_temp_path() was split out of file_unittest.cc into
file_test_util.cc, so that other test suites can make use of it.
WARN_UNUSED_RESULT was replaced with the C++ standard attribute
[[nodiscard]].
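For illustration, a declaration that previously used the Chromium macro now reads as follows (the function name here is hypothetical):

```cpp
#include <string>

// Before: bool ReadFile(const std::string& path, std::string* out) WARN_UNUSED_RESULT;
// After: the standard C++17 attribute expresses the same intent, and callers
// that ignore the return value get a compiler warning.
[[nodiscard]] bool ReadFile(const std::string& path, std::string* out);
```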
The base::Clock interface was replaced with a typedef for a function
pointer that returns the current time.
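A minimal sketch of what such a typedef could look like, assuming std::chrono time points (the actual name and return type in the codebase may differ):

```cpp
#include <chrono>

// Hypothetical stand-in for the removed base::Clock interface: a plain
// function pointer that returns the current time.
using ClockFunction = std::chrono::system_clock::time_point (*)();

// Production code points this at the real clock...
std::chrono::system_clock::time_point RealNow() {
  return std::chrono::system_clock::now();
}

// ...while tests can supply a fixed time instead.
std::chrono::system_clock::time_point FixedNow() {
  return std::chrono::system_clock::time_point();  // Epoch.
}

ClockFunction clock_function = &RealNow;  // Swap to &FixedNow in tests.
```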
This re-enables the tests in http_key_fetcher_unittest.cc by using
httpbin.org.
Issue #1047 (CMake porting)
Issue #346 (absl porting)
# LL-DASH Support
These changes add support for LL-DASH streaming.
**NOTE:** LL-HLS support is still in progress, but it's coming. :)
## Testing
`./chunking_unittest --gtest_filter="ChunkingHandlerTest.LowLatencyDash"`
`./media_event_unittest --gtest_filter="MpdNotifyMuxerListenerTest.LowLatencyDash"`
`./mpd_unittest --gtest_filter="PeriodTest.LowLatencyDashMpdGetXml"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifyAvailabilityTimeOffset"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifySegmentDuration"`
`./mpd_unittest --gtest_filter="LowLatencySegmentTest.LowLatencySegmentTemplate"`
Note: `packager_test` must be run from the main project directory:
`./out/Release/packager_test --gtest_filter="PackagerTest.LowLatencyDashEnabledAndUtcTimingNotSet"`
Now the same pipeline that handles the audio/video streams will handle
the segmented text streams too. This doesn't apply to the text output,
only to the MP4 variants. This also fixes a bug where we added the
X-TIMESTAMP-MAP tag even when there weren't any TS streams; it doesn't
otherwise change the behavior around that tag.
Change-Id: I03f7cea56efa42e96311c00841330629a14aa053
In the media handler there is an example of what a one-to-many media
handler would look like. It used the trick play handler as the example,
but that handler is now a one-to-one media handler. Changed the example
to use the replicator media handler.
Change-Id: I26908c6e27ea4a697a19c0fa0179c60842a449d2
Moved ChainHandlers to MediaHandler::Chain so that it can be used in our
test code as well; after all, it is a pretty helpful function.
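A rough sketch of how test code might use it; the handler class names are placeholders and the exact Chain signature is an assumption, not taken from the real API:

```cpp
#include <memory>

// Placeholder handlers standing in for the real ones in a test fixture.
auto source = std::make_shared<FakeInputMediaHandler>();
auto handler_under_test = std::make_shared<SomeHandlerUnderTest>();
auto sink = std::make_shared<MockOutputMediaHandler>();

// Chain connects each handler's output to the next handler's input in one
// call, replacing the manual wiring previously done by ChainHandlers.
Status status = MediaHandler::Chain({source, handler_under_test, sink});
EXPECT_TRUE(status.ok());
```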
Change-Id: I8d83ee184052cd9fa9b37f2741c96f3223d5ab48
Ran clang-format over media_handler.cc and media_handler.h so that
later changes won't update the formatting as much.
Change-Id: I2db4f9f4e8a66fca1e1418ab99b283a0e4a70e4c
The previous text-to-MP4 WebVTT pipeline was incomplete. It
did not insert ad cues, and it could only insert a segment
after a sample ended.
Now the pipeline supports ad cue insertion and segment insertion
mid text sample. This required the pipeline to use the text
chunker (to split samples and insert segments) and required
a major overhaul of the text-to-MP4 converter.
Previously, the converter came before the chunker, which meant that
the converter only expected to see stream info and text samples.
Moving the converter after the cue aligner and chunker means
that the converter has to be aware of segments and cues.
The general approach is the same; however, the converter now
converts the samples per segment, because the chunker will introduce
duplicate samples if a sample spans across segments.
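The per-segment conversion can be pictured with a small standalone sketch; the types and method names here are illustrative only, not the real converter:

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for a text sample.
struct Cue {
  int64_t start_ms;
  int64_t end_ms;
  std::string payload;
};

// Because the chunker re-emits a sample in every segment it overlaps, the
// converter collects samples and converts them one segment at a time.
class PerSegmentConverter {
 public:
  void OnSample(Cue cue) { buffer_.push_back(std::move(cue)); }

  // Called on each segment message: everything buffered belongs to the
  // segment that just ended, so hand it off and start fresh for the next one.
  std::vector<Cue> OnSegment() {
    std::vector<Cue> segment_cues = std::move(buffer_);
    buffer_.clear();
    return segment_cues;
  }

 private:
  std::vector<Cue> buffer_;
};
```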
Closes #362. Closes #382.
Change-Id: I0f54a40524c36a602ad3804a0da26e80851c92fd
In prep for changes to Trick Play, we want to make all messages
copy on write so that if the same message is sent to multiple
handlers, it is not possible for one handler to change the data
another handler is using.
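A minimal sketch of the copy-on-write idea, using a hypothetical message type rather than the real stream data classes:

```cpp
#include <memory>
#include <string>

// Hypothetical message type standing in for the real handler messages.
struct Message {
  std::string payload;
};

// Messages fan out to handlers as pointers to const, so no handler can
// mutate the shared object in place.
void HandleMessage(const std::shared_ptr<const Message>& message) {
  // A handler that needs different contents copies the message first and
  // forwards its private copy; the other handlers keep seeing the original.
  auto own_copy = std::make_shared<Message>(*message);
  own_copy->payload += " (modified by this handler only)";
}
```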
Change-Id: I554166ca11c532412e4dfced5603972ca24dc2bb
Added a text sample type that can be passed between media handlers.
This will allow text samples to safely be moved between media
handlers before converting to their final format.
Bug: 36138902
Change-Id: Ic4946f774a7d37c43066b9ea46596d5c5f3c05a8
This CL also removes the EncryptionConfig stream data type and merges it
into StreamInfo/SegmentInfo instead.
Change-Id: Idb70ce503e61d3c951225cc78b6b15c084e16dcd
- Also sets up the packaging and verifies that it works.
Some of the changes are temporary to get the integration going.
Change-Id: I0cf6c379d185e157808acabb9ef58ff93d4a39ae
This handler is a multi-in, multi-out handler. If more than one input is
provided, there should be one and only one video stream; also, all inputs
should come from the same thread and be synchronized.
There can be multiple chunking handlers running in different threads or even
different processes. We use the "consistent chunking algorithm" to make sure
the chunks in different streams are aligned without the handlers explicitly
communicating with each other, which would be inefficient and is often
difficult.
Consistent Chunking Algorithm:
1. Find the consistent chunkable boundary
Let the timestamps for video frames be (t1, t2, t3, ...). Then a
consistent chunkable boundary is simply the first chunkable boundary after
(tk / N) != (tk-1 / N), where '/' denotes integer division, and N is the
intended chunk duration.
2. Chunk only at the consistent chunkable boundary
This algorithm will make sure the chunks from different video streams are
aligned if they have aligned GOPs. However, the algorithm only works
for video streams. To be able to chunk non-video streams at similar
positions to video streams, ChunkingHandler is designed to accept one video
input and multiple non-video inputs; the non-video inputs are chunked when
the video input is chunked. If the inputs are synchronized, which is true
if the inputs come from the same demuxer, the video and non-video chunks
are aligned.
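As a small illustration of step 1, the boundary test can be written as below; timestamps and the chunk duration N are assumed to be integers in the same time units, and this is a sketch rather than the production code:

```cpp
#include <cstdint>
#include <vector>

// Returns true when the frame at index k crosses into a new chunk, i.e. when
// (t[k] / N) != (t[k-1] / N) using integer division. The real handler then
// cuts at the first chunkable boundary (e.g. a keyframe) at or after this
// point, and cuts the non-video streams wherever the video stream is cut.
bool CrossesChunkBoundary(const std::vector<int64_t>& timestamps,
                          size_t k,
                          int64_t chunk_duration) {
  if (k == 0) return false;
  return timestamps[k] / chunk_duration !=
         timestamps[k - 1] / chunk_duration;
}
```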
Change-Id: Id3bad51ab14f311efdb8713b6cd36d36cf9e4639