Issue #149
Co-authored-by: Andreas Motl <andreas.motl@elmyra.de>
Co-authored-by: Rintaro Kuroiwa <rkuroiwa@google.com>
Co-authored-by: Ole Andre Birkedal <o.birkedal@sportradar.com>
I.e. the flags --generate_sidx_in_media_segments and
--nogenerate_sidx_in_media_segments work for both single-segment
and multi-segment modes with this change.
Related to #862.
Change-Id: Icd27fd00e8e036ba0c4709b48650372429cc0351
The reference count in 'sidx' box is a uint16 field, which allows at
most 0xFFFF entries, i.e. at most 0xFFFF subsegments, which is roughly
18 hours for one second segments.
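(For a rough sanity check of the 18-hour figure: 0xFFFF = 65535 references,
and 65535 one-second subsegments / 3600 seconds per hour ≈ 18.2 hours.)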
Do not fail packaging when this happens; instead, generate a warning and
truncate the number of references to 0xFFFF.
Note that the actual number of mp4 fragments in the mp4 file can still
be more than 0xFFFF. The stream will not play to the end in DASH, but
it will play successfully in HLS.
Works around #862.
Change-Id: Ib3930418d1528df1f9ea64cda0d0ebaa78d26abb
This also changes the callbacks a bit to (a) avoid passing references
for already ref-counted types, and (b) avoid passing the PID, since the
parent already knows it and gives it to the child parser.
Issue #832
Change-Id: I7dd44436c8d1ad81d42a813d16f850175b85ad1a
This changes the default MP4 output to use TTML and adds a way to
choose which one is used. This is done with 'format=ttml+mp4' or
'format=vtt+mp4'.
This also fixes the boxes output in WebVTT in MP4.
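An illustrative invocation (input/output names are hypothetical;
'format=ttml+mp4' and 'format=vtt+mp4' are the descriptor values added here):
  packager 'in=input.mp4,stream=text,output=sub_ttml.mp4,format=ttml+mp4'
  packager 'in=input.mp4,stream=text,output=sub_vtt.mp4,format=vtt+mp4'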
Change-Id: Ieaa7fc44fbf4dc020a5bb70cfa3578ec10e088ce
This only supports TTML output, meaning the user can convert WebVTT
into TTML, but not the other way around. This will be useful for
DVB-sub subtitles, which are better supported within TTML.
This only adds text-based output; a follow-up will add MP4 support.
Change-Id: I0944b7df95d7765e55f203fc5e9a644f5c455dd8
This adds a new path when parsing MPEG2-TS streams to ignore unsupported
streams. This allows extracting supported streams when some of the
streams are unsupported. For example, you can extract audio from a
file that has unsupported video.
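For example (file names hypothetical), audio can now be extracted on its own:
  packager 'in=input.ts,stream=audio,output=audio.mp4'
even if input.ts also contains a video stream whose codec is unsupported.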
Change-Id: I608fcb19d0a573bfd35e9272f60b0b69346ae11a
This adds more generic settings for regions and CSS styles. These are
global settings, so they go on the StreamInfo object.
Change-Id: Ibb76c060206152ccf8e9a067c09877226f67c927
Now text cues are composed of nested fragments that can be individually
styled. This allows portions of the cue to be bold, etc. The
WebVTT parser doesn't parse the styling tags in the cue text into
fragment styles, but the original tags are preserved in the WebVTT
output. The WebVTT output will also add tags if style elements are
present in the cue object.
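A rough sketch of the idea (the type and field names below are illustrative,
not the actual Packager classes):
  #include <string>
  #include <vector>

  struct TextFragmentStyle {
    bool bold = false;
    bool italic = false;
    bool underline = false;
  };

  struct TextFragment {
    TextFragmentStyle style;
    std::string body;                  // Leaf text, or empty when |nested| is used.
    std::vector<TextFragment> nested;  // Nested fragments, each individually styled.
  };
  // "Hello <b>world</b>" would become a fragment with two children, where the
  // second child has style.bold = true; the WebVTT writer re-emits the <b> tag.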
Change-Id: I6abba4175e376e4f753193f7d8cac63e958d3c89
Now the cue settings are a generic object that is parsed in WebVTT.
This allows the settings to be populated by different parsers without
having to rely on WebVTT specifics.
Change-Id: I36689bec725bd2e515af962b7174fc5977f96fa2
This lays the groundwork for more generic text cues by using a more
generic object for the settings and the body. This also changes
TextSample to be immutable; it accepts the fields in the constructor
instead of using setters.
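A simplified sketch of the new shape (the exact signature is illustrative;
TextSettings and TextFragment are the generic settings/body types described in
the related commits):
  class TextSample {
   public:
    TextSample(const std::string& id, int64_t start_time, int64_t end_time,
               const TextSettings& settings, const TextFragment& body)
        : id_(id), start_time_(start_time), end_time_(end_time),
          settings_(settings), body_(body) {}
    // Const accessors only; there are no setters, so a sample cannot be
    // mutated after construction.
    int64_t start_time() const { return start_time_; }
    int64_t end_time() const { return end_time_; }
   private:
    const std::string id_;
    const int64_t start_time_;
    const int64_t end_time_;
    const TextSettings settings_;
    const TextFragment body_;
  };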
Change-Id: I76b09ce8e8471a49e6bf447e8c187f867728a4bf
Now text-based WebVTT also uses the generic media pipeline. This
converts the WebVttTextOutputHandler to a WebVttMuxer to be more
consistent with the other muxer types.
This also allows choosing between single-segment and multi-segment text.
Before, we would generate both and use single-segment for DASH and
multi-segment for HLS; now you can choose either one, and both are
supported in DASH and HLS.
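Illustrative stream descriptors for the two modes (names hypothetical):
  # Single-segment text output.
  packager 'in=input.vtt,stream=text,output=sub.vtt'
  # Multi-segment text output.
  packager 'in=input.vtt,stream=text,segment_template=sub_$Number$.vtt'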
Change-Id: I6f7edda09e01b5f40e819290d3fe6e88677018d9
This changes it from an OriginHandler to a MediaParser and moves the
handling of it to the Demuxer. This will allow more generic handling
of text by giving it the same abstractions as video/audio handling.
Change-Id: Ibbde3c84d228ec8e83af1ed266ea97dbc9589c24
In addition to the MediaSample handling of the MediaParser, this now
adds callbacks for TextSample. This allows reading text streams from
the media files.
Change-Id: I6c00e286e98bc9aafe05b99cf2f7ce6f89d167a9
Instead of having the text readers read from the file directly, they
now accept the data as a stream.
Change-Id: Id1b32c867a8058a68ae7aab5c568f77672a4401d
Opening a named pipe can block until both ends are open, and we cannot
control when the other end will be open. Ideally, we would always
open files in a thread so that Packager can be used with piped inputs
from naive applications without a potential deadlock.
This change will defer opening WebVTT files until the parser Run()
method is called from a thread. This way, WebVTT files being sent in
from a pipe will never be able to block the main thread.
Previously, files were opened on the main thread before calling the
parser constructor, passing the open file to the constructor as an
argument. I also tried doing it in the parser's InitializeInternal()
method, but that is also called from the main thread.
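A minimal sketch of the resulting flow (method and helper names illustrative):
  Status WebVttParser::Run() {
    // The input (possibly a named pipe) is opened here, on the parser's own
    // thread, so a blocking open can no longer stall the main thread.
    std::unique_ptr<File, FileCloser> file(File::Open(file_path_.c_str(), "r"));
    if (!file)
      return Status(error::FILE_FAILURE, "Cannot open " + file_path_);
    return ParseFile(file.get());  // Hypothetical helper doing the actual parsing.
  }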
Change-Id: I54cc68ed9d48a8dc697829119be84d4065b1ae1c
Under the command line flag --mvex_before_trak.
This is needed to work around an Android MediaExtractor bug which
requires |mvex| to appear before |trak|.
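Example usage (stream descriptor hypothetical):
  packager 'in=input.mp4,stream=video,output=video.mp4' --mvex_before_trak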
Closes #711.
Change-Id: Id41d71af5c0016f59023dda6408bbf502e12ac55
Added Dolby Vision backward compatible signalling, i.e. for Dolby Vision
profile 8, both base codec without Dolby Vision and HDR codec with Dolby
Vision are signalled.
This is achieved by using a new MuxerListener implementation
MultiCodecMuxerListener, which wraps multiple child MuxerListeners and
is able to delegate to the child MuxerListeners based on the codecs in
the stream.
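A rough sketch of the delegation idea (the real MuxerListener interface has
many per-event virtual methods; the wrapper shown here is a hypothetical
simplification):
  class MultiCodecMuxerListener : public MuxerListener {
   public:
    void AddListener(std::unique_ptr<MuxerListener> listener) {
      children_.push_back(std::move(listener));
    }
    // For each muxer event, forward only to the children whose codec filter
    // matches the codecs present in the stream.
   private:
    std::vector<std::unique_ptr<MuxerListener>> children_;
  };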
Closes #341.
Change-Id: I1967bb1ed503087cdd011c364e5fb5647d516ca4
- Parse and extract transfer_characteristics from H264/H265 VUI
parameters.
- Set the VIDEO-RANGE attribute in HLS according to the HLS specification
(see the illustrative playlist line after this list):
https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-02#section-4.4.4.2
- Also added an end to end test.
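For illustration, the attribute lands on the variant stream entries of the
master playlist; the bandwidth, codecs and resolution below are made up:
  #EXT-X-STREAM-INF:BANDWIDTH=3000000,CODECS="hvc1.2.4.L120.B0",RESOLUTION=1920x1080,VIDEO-RANGE=PQ
  h265/main.m3u8
where VIDEO-RANGE is SDR or PQ depending on the parsed transfer_characteristics.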
Fixes #632.
Change-Id: Iadf557d967b42ade321fb0b152e8e7b64fe9ff3e
- Add relevant FOURCCs for Dolby Vision.
- Parse DOVIDecoderConfigurationRecord (dvcC, dvvC) to generate
Dolby Vision codec string.
- Propagate Dolby Vision configs (dvcC, dvvC, hvcE) from Demuxer
to Muxer.
- Add a Dolby Vision end to end test.
Support for backward compatibility signaling in DASH and HLS will be
added in a later CL.
Issue #341
Change-Id: If1385df5f48e04b59cb7661130bea48e26b453bf
The latest version of FFmpeg encodes non-standard channel layouts, e.g. 5.1(side), in AAC using a PCE.
This is now supported with the below changes (a rough sketch follows the list):
- Allow channel_configuration in the ADTS header to be 0, as the actual channel layout is specified
in the PCE.
- Add GetFrameSizeWithoutParsing to determine the frame size before actually parsing the frame.
- Skip and resume later if the whole frame is not yet available.
- Also ensure that the next frame starts with a proper sync word.
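A rough sketch of the skip-and-resume logic (simplified; only
GetFrameSizeWithoutParsing is a name from this change, the rest is
illustrative):
  // Peek the frame_length field from the fixed ADTS header so the full frame
  // size is known before parsing anything else.
  const size_t frame_size = GetFrameSizeWithoutParsing(data, data_size);
  if (data_size < frame_size) {
    // Not the whole frame yet: skip now and resume when more data arrives.
    return kNeedMoreData;
  }
  // With the whole frame available, parse it (including the PCE when the ADTS
  // channel_configuration is 0), then verify that the bytes right after the
  // frame start with the 0xFFF sync word of the next ADTS header.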
Fixes #598.
- Parse parameter set NAL units in the samples.
- Calculate pixel width and height from track width and height.
Fixes #621, #627.
Change-Id: Ic1e120dccbd220b01168f7bf4effeaa43f95b055
- Define BaseDescriptor and generic read / write operations.
- Define descriptors: ESDescriptor, DecoderConfigDescriptor,
DecoderSpecificInfoDescriptor, SLConfigDescriptor.
DecoderSpecificInfoDescriptor and all other descriptors can now
handle arbitrary length sizes; the previous 64-byte limit on
DecoderSpecificInfoDescriptor, which existed only to keep the
ESDescriptor length size to one byte, has been removed (see the
size-parsing sketch after this list).
- Now DecoderConfigDescriptor is able to handle reading and writing
of all fields including buffer_size_db, which was not handled
earlier.
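For reference, a sketch of the variable-length size field that BaseDescriptor
now handles (each byte carries 7 size bits, and the high bit signals that
another size byte follows); the reader API shown is illustrative:
  uint32_t size = 0;
  uint8_t size_byte = 0;
  do {
    RCHECK(reader->Read1(&size_byte));        // Read one size byte.
    size = (size << 7) | (size_byte & 0x7F);  // Accumulate the low 7 bits.
  } while (size_byte & 0x80);                 // High bit set: more bytes follow.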
Fixes #536.
Change-Id: Ia8a775f8bf6e90e3343a85f0e643bc44cd017c7a
VLC seems to generate access units with extra AUDs. In #526, the below
sequence is seen:
AUD | SPS | PPS | SPS | PPS | AUD | SEI | SEI | SEI | IDR_SLICE
Previously we exited early upon seeing an AUD, which resulted in delayed
processing of the access unit.
The behavior is changed to continue processing the following NAL units
to work around the content issue.
Closes #526.
Change-Id: I80f571c0711c6db1337eb393fce36fae5432b6c5
kFrameSizeCodeTable rows are ordered by 32kHz, 44.1kHz and 48kHz,
which is the reverse of fscod (48kHz, 44.1kHz and 32kHz).
Also updated unittests.
Fixes #487.
Change-Id: Icb0afb8bb895afde0028eee05b403bc85bf7b538
This was introduced earlier to indicate the FairPlay protection system, but
in fact it is sufficient to just use the system ID for the indication.
- Also updated various parts of the pipeline to support empty PSSH.
- Added an additional FairPlay end to end test using fMP4.
Change-Id: Ica48b7b5235e9a2b5a7f722bcd0fc1ef2073ac13
The time of the previous segment was used when generating the segment
name. This resulted in the first segment being overwritten and in
mismatched manifest and media files, which led to playback problems.
Issue #472.
Change-Id: Ia8130ce261585e1a2ede83b26de3e32508de087f
The VP9 level is computed when the container is missing a codec config
or if the level is missing from the codec config.
This fixes VP9 in ISO-BMFF files generated by FFmpeg v4.0.2 or earlier,
which do not have the level set in the codec config.
Fixes #469.
Change-Id: I685bfd48be16ee6b2209da1c3173f7d6bb02b36a
Implemented per AV1 Codec ISO Media File Format Binding at
https://aomediacodec.github.io/av1-isobmff/
And AOM AV1 codec mapping in Matroska/WebM at
https://github.com/Matroska-Org/matroska-specification/blob/av1-mappin/codec/av1.md
Note that AV1-specific boxes are not supported in this CL, i.e. the
AV1 Forward Key Frame sample group entry 'av1f', the AV1 Multi-Frame
sample group entry 'av1m', etc. are not supported. These boxes are optional.
We will add support later if they are useful to the clients / players.
Encryption is not supported yet.
Issue #453.
Change-Id: I630432d0a9bf82d263ffaf40e57f67fc65eee902
Negative duration is not allowed, so set the duration of that sample to
an arbitrary small value in case it is needed to decode future samples.
Issue #451.
Change-Id: I9250d71d163f769ea2657d56e108b6dbd583de67
Note that STYLE and REGION are not supported in the mp4 container due to
a spec limitation: 14496-30:2014 does not specify a way to signal
styles/regions inside mp4.
Closes #344.
Change-Id: I05c14df916f7b2c7ca4364ee9407e0eda4dc7a3f
- Also fixed compilation on Alpine Linux and other flavors of Linux.
- Added container versions in docker files to always use a verified
version.
Closes #164.
Change-Id: I949a8709e4d70c49129c9c2e8608dd78193d964c
In some ISO-BMFF files, there is an initial non-zero composition offset,
but there is no EditList present.
This is against the ISO-BMFF spec recommendation [1], and we believe in most
cases the EditList is simply missing.
[1] 14496-12:2015 8.6.6.1
It is recommended that such an edit be used to establish a presentation
time of 0 for the first presented sample, when composition offsets are
used.
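As a concrete illustration (timescale and numbers hypothetical): with a 90000
timescale, a first sample with dts = 0 and a composition offset of 3003 has
pts = 3003; the edit the spec recommends, an EditList entry with
media_time = 3003, is what maps that first presented sample back to
presentation time 0, and it is exactly this entry that such files are missing.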
Issue: #112.
Change-Id: I178d5ec9d8c294c9f70aac4f4dd6254c824e2255
Configurable with --transport_stream_offset_ms.
This is needed to compensate for possible negative timestamps in
inputs, which could happen on ISO-BMFF with EditLists.
Issue #112.
Change-Id: I0fce8766c9df2911b9bb859c1e54052a8ed2abfb
In some ISO-BMFF files, there is an initial non-zero composition offset,
but there is no EditList present.
This is against the ISO-BMFF spec recommendation [1], and we believe in most
cases the EditList is simply missing.
[1] 14496-12:2015 8.6.6.1
It is recommended that such an edit be used to establish a presentation
time of 0 for the first presented sample, when composition offsets are
used.
Issue: #112.
Fixes: b/110782437.
Change-Id: I23d33810ce536b09a1e22a2644828d824c1314f5
- EditLists in input files are parsed and applied to sample timestamps.
- An EditList will be inserted in the ISO-BMFF output if
- There is an offset between the initial presentation timestamp (pts)
and decoding timestamp (dts). Chrome, as of M67, still uses dts in
buffered range API [1], which creates various problems when buffered
range by pts does not align with buffered range by dts. There is
another bug in Chrome that applies EditList to pts only [2]. This
means that we can insert an EditList to align pts range and dts range.
- MediaSamples have negative timestamps (e.g. for Audio Priming).
You may notice the below change for some content:
- Some media durations are reduced by one or two frames. This is because
the EditList in the input file was ignored in the previous code, so video
streams started with a zero dts and a non-zero pts; the smaller of dts
and pts was used as the starting timestamp (related to the earlier
workaround for Chrome's dts bug), so the calculated duration was
a bit larger than the actual duration. Now, with the EditList
applied, the initial pts is reduced to zero, so the media duration is
also reduced to reflect the actual, correct media duration.
It may also result in negative timestamps in TS/HLS Packed Audio, which
will be addressed in a follow up CL.
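As a hypothetical illustration of the duration change: for a 30 fps stream
whose first pts is two frames ahead of its dts, the old code took 0 (the dts)
as the starting timestamp while the ending timestamp was unchanged, so the
reported duration was about two frame durations (roughly 66 ms) too long; with
the EditList applied, the initial pts becomes 0 and the duration shrinks by
those one or two frames.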
Fixes #112.
Partially address b/110782437.
[1] https://crbug.com/718641, fixed but behind MseBufferByPts.
[2] https://crbug.com/354518. Chrome is planning to enable the fix for
[1] before addressing this bug, so we are safe.
Change-Id: I59317740ad3807ca66fa74b3a18fdf7f32c96aeb
WebVTT cues without a payload may not carry meaningful information, but
they are allowed by the WebVTT specification [1]. They can also be useful
sometimes, e.g. to signal time progression in the live case.
Fixes #433.
[1] https://www.w3.org/TR/webvtt1/#types-of-webvtt-cue-payload
Change-Id: I9e31f4a3789cbdafb7667b64f4019834190ecfc0
Before, we had an assert that would catch a sample with
invalid times; however, input may have bad times.
We did have a message if we saw a sample with a duration equal to
zero. This expands that check to validate the times in
general and ignores any sample that is not valid.
Fixes #425
Change-Id: I9774bfbdbd401f3016d2c345665b9973d1889db7
This flag was designed for two purposes:
- Grouping fragments into subsegments, achieving a three-level hierarchy:
segment < subsegment < fragment.
- Indicating whether to generate a 'sidx' box in media segments (when the
value is set to a negative number).
There is no practical use case for the first purpose, so remove it to
simplify the code and reduce confusion.
Introduce another flag --generate_sidx_in_media_segments for the second
purpose.
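Illustrative usage of the new flag (stream descriptor hypothetical):
  packager 'in=input.mp4,stream=video,init_segment=init.mp4,segment_template=seg_$Number$.m4s' \
    --generate_sidx_in_media_segments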
Change-Id: I4be7cd42662fb324c1158b978e05768ee49dd048
It was implemented to work around Chromium's DTS bug,
https://crbug.com/398130, but the workaround does not really work in
all situations.
Remove it now, as we already have another workaround available.
Change-Id: I291f559d78120fb743a6679b7d927e5bbc5b6b4e
Previously, the text padder media handler would assume that text always
started at time zero. This would work for VOD but would result in a
large pad at the start of LIVE content.
To avoid this, the text padder now uses a bias to decide whether the
content starts at zero. Right now the bias is set to 10
minutes, but it will later be configurable with a command line flag.
Ten minutes was chosen because LIVE content will have much larger start
values and VOD content should have much lower ones.
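A minimal sketch of the heuristic (the 10-minute value is from this change;
names and code shape are illustrative):
  // Content whose first sample starts within the bias is assumed to start at
  // zero (VOD-style) and is padded from zero; anything starting later is
  // assumed to be LIVE and is not padded back to zero.
  const int64_t kStartBiasMs = 10 * 60 * 1000;
  const bool assume_starts_at_zero = first_sample_start_ms < kStartBiasMs;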
Issue: #416
Change-Id: I07af15a577392fb030e36f052085cd4e667700e8
Added the key frame field to the IsMediaSample matcher. All
tests that do not care about whether a sample is a key frame
have been updated to use "_".
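For illustration, with GoogleMock's "_" wildcard such a test can now be
written along these lines (the argument order is illustrative):
  EXPECT_THAT(stream_data,
              IsMediaSample(kStreamIndex, kTimestamp, kDuration, _, _));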
Change-Id: I44180687c58c260b6856e683d647f532227b14d5
Updated all the stream data matchers we use in our unit tests
to allow us to use matchers within them. We are now able to use "_"
to ignore specific parameters.
With this we were able to replace the different versions of the
matchers for each stream data type with a single matcher for
each type.
This includes updates to how strings are printed to the listener. Strings
now go through a "pretty" function to make them easier to
read in the output.
Change-Id: I146351b54fccd63ab9ec936877e6c6b30f9aa9fc
Make the text stream info factory method in media_handler_test_base
require the caller to specify the time scale.
Issue: #399
Change-Id: Ibdfb183e0aa3f4ff50edf6b58c4e9b966006c6d2
To ensure that every variable in a box is explicitly set,
every variable has been assigned a default in the header.
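The pattern is simply C++ default member initializers in the box headers,
e.g. (box and member names hypothetical):
  struct SampleEntryExample : Box {
    uint8_t version = 0;
    uint32_t flags = 0u;
    uint16_t data_reference_index = 1u;  // Never left uninitialized.
  };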
Change-Id: Iaa806c4058ac6621a64363a00040fbd9903c6710
Text Samples with no payload should be ignored, so this adds a
test to check that samples with no payload get treated the
same as a gap.
As long as this case is true, using gaps in our other tests should
functionally be the same as using samples with no payload.
Change-Id: Ic16b240c43eda2514b537a2d938d4135638adc4e
Before, we used the same payload for each text sample, as we were
focusing on the times rather than the contents.
As we look to add tests that rely on specific sample payloads, we
need to change the tests to explicitly set the payload for each
sample.
Change-Id: I24174686f46535cf6c2d59a18308101a3bb51c87
EPT (earliest presentation time) may be adjusted so it is not lower than
the decoding timestamp (dts), but the adjustment should only be done
on the first file when there is one file per Representation per Period.
The second file and onwards should not be adjusted; otherwise a gap
would be created.
Closes #384
Closes b/78517422
Change-Id: I56771ad8fbbe6a87b832ec58854cfbf37d5f1817
In our text-to-mp4 tests, we were only checking whether the times on
the samples lined up with what we were expecting.
We want to check that text sample contents (ids, settings, and
payloads) were correctly merged into a single media sample. To
verify this, we now check if the sample ids appear in the
media sample.
Change-Id: Ica1a85a14e7b116275e3571332b2e90d7bc44c45
In our text-to-mp4 tests, we used the same sample id for each sample;
this changes it so that each sample (within a single test) has a
unique id.
This is done in preparation to look for the ids in the created
media samples.
Change-Id: I3215a6f09279af8f40e1ce8a959e0a522a811173
The previous text-to-mp4 WebVTT pipeline was incomplete. It
did not insert ad cues, and it could only insert a segment
after a sample ended.
Now the pipeline supports ad cue insertion and segment insertion
mid-sample. This required the pipeline to use the text
chunker (to split samples and insert segments) and required
a major overhaul of the text-to-mp4 converter.
Before, the converter came before the chunker, which meant that
the converter only expected to see stream info and text samples.
Moving the converter after the cue aligner and chunker means
that the converter has to be aware of segments and cues.
The general approach is the same; however, the converter now converts
the samples per segment, as the chunker will introduce
duplicate samples if a sample spans segments.
Closes #362. Closes #382.
Change-Id: I0f54a40524c36a602ad3804a0da26e80851c92fd
In the text sample box (for mp4) there was a value called
"data_reference_index" that was never initialized. This meant
that it took on arbitrary values and caused different results
between runs.
Change-Id: I4b18ac97ec4700f6e651b14898ef250713a4253c
Renamed all the files called "webvtt_output_handler*" to
"webvtt_text_output_handler*" to better reflect the class name
in them.
Change-Id: I977bab362076974a124f263bcefff716ed8b6a0f
We don't want any handler to be copyable or assignable,
so this change enforces that for the WebVTT output handler.
Change-Id: Ie0d59d6dbfb7a5e00bb4dd1422cd696d1a2d6072
Before, the webvtt output handler was written so that it could
share code between a segmented and non-segmented handler. As
we are not worried about that right now, this change simplifies
the handler to just be about segmented output.
Change-Id: I29dbc4e3a4ffbeb7ea10e23db489ee74b398a6c4
Instead of failing immediately, ignore unsupported audio codec when
parsing the source file, as there may be more than one stream in the
source file. This allows the supported streams to be packaged.
Closes #395.
Change-Id: I01005a93a19012c19065251647c9b06dd25c673a
The Cue class was from a previous WebVTT implementation and
is not used in the current implementation. It was missed when the
other classes were removed. This change removes it.
Change-Id: I661ab3fcd80b5e5ef98b5213746b341a4028d1a1
When originally implementing the WebVTT parser, there was a
misunderstanding of what the BOM was supposed to be
(https://en.wikipedia.org/wiki/Byte_order_mark). This corrects the
misunderstanding.
Closes #397
Change-Id: I250d392db228e5e9b86684614b57adc5d8a4e5fe
This is in preparation for supporting the entitlement license API, where
the common encryption server may return concatenated PSSHs directly.
Refactored ProtectionSystemSpecificInfo into a struct containing
concatenated PSSHs, which makes it easier to pass PSSHs around.
Also, most of the time users of ProtectionSystemSpecificInfo do
not care what is in the PSSH, so PSSH box parsing and building were
moved out of ProtectionSystemSpecificInfo.
b/78171767
Change-Id: I1c4d5e7e23efd2f7d4b2b9704378323112e47f00
Packager uses ThreadedIO to write media segments and the manifest /
playlists. There was a possibility that a media segment write could be
delayed and scheduled after the manifest / playlists were updated.
This CL fixes the race condition.
Also added a note on how segments can be synced to cloud storage to
avoid the race condition during file sync.
Also added a live WebM test.
Fixes #386.
Change-Id: Icf9c38cdec715fa3dc2836eab1511131e129fe41
To ensure that we can parse content with style and region blocks,
this change updates the parser to skip those blocks so that we
can still parse the cues from a file.
Full style and region support will be added later this year.
Issue #380
Change-Id: I11b8fd862a108c27a5c67b15d4703532b44a1214
Instead, the actual earliest presentation time is used except for
the first segment if there is an offset between presentation time
(pts) and decoding time (dts).
Chrome (as of v66) reports dts instead of pts in buffered ranges in
the MSE API. To avoid breaking Chrome, the earliest_presentation_time
of the first segment is set to its dts, as Chrome does not like negative
values for:
  adjusted dts = dts + Period@start (0 for the first period)
               - presentationTimeOffset (earliest_presentation_time).
Fixes #303.
Change-Id: I5ca80e05d5570961400499436f2bcc01f06e69e0
The WebVTT output handler did not recognize cue events. This change
allows the handler to accept the events and tell the muxer listener
about them.
Issue #362
Change-Id: I7c3318b72e539adc19af587c8e213fdb0af8290b