- Parse and extract transfer_characteristics from H264/H265 VUI
parameters.
- Set VIDEO-RANGE attribute in HLS according to HLS specification:
https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-02#section-4.4.4.2
- Also added an end-to-end test.
Fixes #632.
Change-Id: Iadf557d967b42ade321fb0b152e8e7b64fe9ff3e
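For reference, a minimal sketch of the mapping (the helper name is illustrative, and folding HLG into PQ is an assumption, since draft 02 of the HLS spec only defines the SDR and PQ values):

  #include <cstdint>
  #include <string>

  // Sketch: map VUI transfer_characteristics to the HLS VIDEO-RANGE
  // attribute. 16 is PQ (SMPTE ST 2084) and 18 is HLG (ARIB STD-B67);
  // both map to "PQ" here because draft 02 only defines SDR and PQ.
  std::string GetVideoRange(uint8_t transfer_characteristics) {
    switch (transfer_characteristics) {
      case 16:  // PQ.
      case 18:  // HLG.
        return "PQ";
      default:
        return "SDR";
    }
  }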
- Add relevant FOURCCs for Dolby Vision.
- Parse DOVIDecoderConfigurationRecord (dvcC, dvvC) to generate
Dolby Vision codec string.
- Propagate Dolby Vision configs (dvcC, dvvC, hvcE) from Demuxer
to Muxer.
- Add a Dolby Vision end-to-end test.
Support for backward compatibility signaling in DASH and HLS will be
added in a later CL.
Issue #341
Change-Id: If1385df5f48e04b59cb7661130bea48e26b453bf
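A minimal sketch of how the codec string can be assembled from the dv_profile and dv_level fields of the DOVIDecoderConfigurationRecord (illustrative function name; only the HEVC "dvhe" prefix is shown):

  #include <cstdint>
  #include <cstdio>
  #include <string>

  // Sketch: build a Dolby Vision codec string such as "dvhe.05.06".
  // The prefix depends on the base codec ("dvhe"/"dvh1" for HEVC,
  // "dvav"/"dva1" for AVC); only one case is illustrated here.
  std::string DolbyVisionCodecString(uint8_t dv_profile, uint8_t dv_level) {
    char buffer[16];
    std::snprintf(buffer, sizeof(buffer), "dvhe.%02d.%02d",
                  static_cast<int>(dv_profile), static_cast<int>(dv_level));
    return buffer;
  }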
The latest version of FFmpeg encodes non-standard channel layouts, e.g. 5.1(side), in AAC using PCE.
This is now supported with the below changes:
- Allow channel_configuration in the ADTS header to be 0, as the actual channel layout is specified
in the PCE.
- Add GetFrameSizeWithoutParsing to determine the frame size before actually parsing the frame.
- Skip and resume later if the whole frame is not available.
- Also ensure that the next frame starts with a proper sync word.
Fixes #598.
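A sketch of the frame-size lookup in the spirit of GetFrameSizeWithoutParsing (the exact signature is an assumption); aac_frame_length is a 13-bit field spanning bytes 3..5 of the ADTS header:

  #include <cstddef>
  #include <cstdint>

  // Sketch: read the ADTS frame size straight from the fixed header,
  // without parsing the rest of the frame.
  size_t GetAdtsFrameSize(const uint8_t* data, size_t size) {
    if (size < 6)
      return 0;
    // Verify the 12-bit sync word 0xFFF before trusting the header.
    if (data[0] != 0xFF || (data[1] & 0xF0) != 0xF0)
      return 0;
    return (static_cast<size_t>(data[3] & 0x03) << 11) |
           (static_cast<size_t>(data[4]) << 3) |
           (static_cast<size_t>(data[5]) >> 5);
  }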
- Parse parameter set NAL units in the samples.
- Calculate pixel width and height from track width and height.
Fixes #621, #627.
Change-Id: Ic1e120dccbd220b01168f7bf4effeaa43f95b055
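The pixel width / height calculation amounts to solving track_width / track_height == (coded_width * pixel_width) / (coded_height * pixel_height) for the pixel aspect ratio; a hedged sketch with illustrative names:

  #include <cstdint>
  #include <numeric>  // std::gcd (C++17)

  // Sketch: derive pixel_width:pixel_height (the sample aspect ratio)
  // from the track's display dimensions and the coded frame dimensions.
  void DerivePixelWidthHeight(uint32_t track_width, uint32_t track_height,
                              uint32_t coded_width, uint32_t coded_height,
                              uint32_t* pixel_width, uint32_t* pixel_height) {
    const uint64_t pw = static_cast<uint64_t>(track_width) * coded_height;
    const uint64_t ph = static_cast<uint64_t>(track_height) * coded_width;
    if (pw == 0 || ph == 0) {
      *pixel_width = 1;  // Fall back to square pixels on bad input.
      *pixel_height = 1;
      return;
    }
    const uint64_t d = std::gcd(pw, ph);
    *pixel_width = static_cast<uint32_t>(pw / d);
    *pixel_height = static_cast<uint32_t>(ph / d);
  }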
- Define BaseDescriptor and generic read / write operations.
- Define descriptors: ESDescriptor, DecoderConfigDescriptor,
DecoderSpecificInfoDescriptor, SLConfigDescriptor.
DecoderSpecificInfoDescriptor and all other descriptors can now
handle an arbitrary size field, removing the previous 64-byte limit on
DecoderSpecificInfoDescriptor, which had been imposed to keep the
ESDescriptor size field to one byte.
- DecoderConfigDescriptor is now able to read and write all fields,
including buffer_size_db, which was not handled earlier.
Fixes #536.
Change-Id: Ia8a775f8bf6e90e3343a85f0e643bc44cd017c7a
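For context, the size field shared by these descriptors is the MPEG-4 "expandable" encoding from ISO/IEC 14496-1: 7 bits of size per byte, with the high bit signaling that another byte follows. A rough sketch of writing it (not the project's exact API):

  #include <cstdint>
  #include <vector>

  // Sketch: write a descriptor size of up to 28 bits, most significant
  // 7-bit group first, with the continuation bit (0x80) set on every
  // byte except the last.
  void WriteDescriptorSize(uint32_t size, std::vector<uint8_t>* out) {
    const uint8_t groups[4] = {
        static_cast<uint8_t>((size >> 21) & 0x7F),
        static_cast<uint8_t>((size >> 14) & 0x7F),
        static_cast<uint8_t>((size >> 7) & 0x7F),
        static_cast<uint8_t>(size & 0x7F),
    };
    bool started = false;
    for (int i = 0; i < 4; ++i) {
      if (!started && groups[i] == 0 && i != 3)
        continue;  // Skip leading all-zero groups.
      started = true;
      out->push_back(i == 3 ? groups[i] : (groups[i] | 0x80));
    }
  }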
VLC seems to generate access units with extra AUDs. In #526, the below
sequence is seen:
AUD | SPS | PPS | SPS | PPS | AUD | SEI | SEI | SEI | IDR_SLICE
Previously we exited early when seeing an AUD, which resulted in delayed
processing of the access unit.
The behavior is changed to continue processing the following NAL units
to work around the content issue.
Closes #526.
Change-Id: I80f571c0711c6db1337eb393fce36fae5432b6c5
kFrameSizeCodeTable rows are ordered by 32kHz, 44.1kHz and 48kHz,
which is the reverse of fscod (48kHz, 44.1kHz and 32kHz).
Also updated unittests.
Fixes #487.
Change-Id: Icb0afb8bb895afde0028eee05b403bc85bf7b538
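For context, fscod indexes the AC-3 sample rates as below, so any table indexed by fscod has to list its rows in the same order (a sketch with illustrative names):

  #include <cstdint>

  // Sketch: AC-3 fscod to sample rate. fscod 3 is reserved.
  const uint32_t kSampleRatesByFscod[] = {48000, 44100, 32000};

  uint32_t SampleRateFromFscod(uint8_t fscod) {
    return fscod < 3 ? kSampleRatesByFscod[fscod] : 0;
  }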
This was introduced earlier to indicate the FairPlay protection system. But
in fact, it is sufficient to just use the system id for the indication.
- Also updated various parts of the pipeline to support empty PSSH.
- Added an additional FairPlay end-to-end test using fMP4.
Change-Id: Ica48b7b5235e9a2b5a7f722bcd0fc1ef2073ac13
The time for the previous segment was used when generating the segment
name. This resulted in the first segment being overwritten and
mismatching manifest and media files. It led to playback problems.
Issue #472.
Change-Id: Ia8130ce261585e1a2ede83b26de3e32508de087f
The VP9 level is computed when the container is missing a codec config
or if the level is missing from the codec config.
This fixes VP9 in ISO-BMFF files generated by FFmpeg v4.0.2 or earlier,
which do not have the level set in the codec config.
Fixes #469.
Change-Id: I685bfd48be16ee6b2209da1c3173f7d6bb02b36a
Implemented per AV1 Codec ISO Media File Format Binding at
https://aomediacodec.github.io/av1-isobmff/
And AOM AV1 codec mapping in Matroska/WebM at
https://github.com/Matroska-Org/matroska-specification/blob/av1-mappin/codec/av1.md
Note that AV1 specific boxes are not supported in this CL, e.g. the
AV1 Forward Key Frame sample group entry 'av1f', the AV1 Multi-Frame
sample group entry 'av1m', etc. These boxes are optional.
We will add support later if they are useful to the clients / players.
Encryption is not supported yet.
Issue #453.
Change-Id: I630432d0a9bf82d263ffaf40e57f67fc65eee902
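A hedged sketch of the short-form AV1 codecs string defined by the binding above (optional color-description fields omitted; the function name is illustrative):

  #include <cstdio>
  #include <string>

  // Sketch: av01.<profile>.<level><tier>.<bit depth>, e.g. "av01.0.04M.08".
  std::string Av1CodecString(int profile, int level, bool high_tier,
                             int bit_depth) {
    char buffer[32];
    std::snprintf(buffer, sizeof(buffer), "av01.%d.%02d%c.%02d", profile,
                  level, high_tier ? 'H' : 'M', bit_depth);
    return buffer;
  }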
Negative duration is not allowed, so set the duration of that sample to
an arbitrary small value in case it is needed to decode future samples.
Issue #451.
Change-Id: I9250d71d163f769ea2657d56e108b6dbd583de67
Note that STYLE and REGION are not supported in the mp4 container due to
a spec limitation, as 14496-30:2014 does not specify a way to signal
styles/regions inside mp4.
Closes #344.
Change-Id: I05c14df916f7b2c7ca4364ee9407e0eda4dc7a3f
- Also fixed compilation on Alpine Linux and other flavors of Linux.
- Added container versions in docker files to always use a verified
version.
Closes #164.
Change-Id: I949a8709e4d70c49129c9c2e8608dd78193d964c
In some ISO-BMFF files, there is an initial non-zero composition offset,
but there is no EditList present.
This is against the ISO-BMFF spec recommendation [1] and we believe in most
cases it is just missing the EditList.
[1] 14496-12:2015 8.6.6.1
It is recommended that such an edit be used to establish a presentation
time of 0 for the first presented sample, when composition offsets are
used.
Issue: #112.
Change-Id: I178d5ec9d8c294c9f70aac4f4dd6254c824e2255
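A simplified sketch of the compensation, assuming the affected samples are buffered (illustrative types and names; the real parser works on a stream):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct SampleTimes {
    int64_t dts = 0;
    int64_t pts = 0;
  };

  // Sketch: when there is an initial non-zero composition offset but no
  // EditList, behave as if the recommended edit existed and shift pts so
  // the earliest presented sample starts at presentation time 0.
  void ApplyImplicitEditList(std::vector<SampleTimes>* samples) {
    if (samples->empty())
      return;
    int64_t min_pts = samples->front().pts;
    for (const SampleTimes& s : *samples)
      min_pts = std::min(min_pts, s.pts);
    if (min_pts <= 0)
      return;  // Nothing to compensate.
    for (SampleTimes& s : *samples)
      s.pts -= min_pts;
  }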
Configurable with --transport_stream_offset_ms.
This is needed to compensate for possible negative timestamps in
inputs, which could happen on ISO-BMFF with EditLists.
Issue #112.
Change-Id: I0fce8766c9df2911b9bb859c1e54052a8ed2abfb
In some ISO-BMFF files, there is an initial non-zero composition offset,
but there is no EditList present.
This is against the ISO-BMFF spec recommendation [1] and we believe in most
cases it is just missing the EditList.
[1] 14496-12:2015 8.6.6.1
It is recommended that such an edit be used to establish a presentation
time of 0 for the first presented sample, when composition offsets are
used.
Issue: #112.
Fixes: b/110782437.
Change-Id: I23d33810ce536b09a1e22a2644828d824c1314f5
- EditLists in input files are parsed and applied to sample timestamps.
- An EditList will be inserted in the ISO-BMFF output if
- There is an offset between the initial presentation timestamp (pts)
and decoding timestamp (dts). Chrome, as of M67, still uses dts in
buffered range API [1], which creates various problems when buffered
range by pts does not align with buffered range by dts. There is
another bug in Chrome that applies EditList to pts only [2]. This
means that we can insert an EditList to align pts range and dts range.
- MediaSamples have negative timestamps (e.g. for Audio Priming).
You may notice the below changes on some content:
- Some media durations are reduced by one or two frames. This is because
the EditList in the input file was ignored in the previous code, so video
streams started with a zero dts and a non-zero pts; the smaller of dts
and pts was used as the starting timestamp (related to the earlier
workaround for Chrome's dts bug), so the calculated duration was
actually a bit larger than the actual duration. Now with EditList
applied, the initial pts is reduced to zero, so the media duration is
also reduced to reflect the actual and correct media duration.
It may also result in negative timestamps in TS/HLS Packed Audio, which
will be addressed in a follow up CL.
Fixes #112.
Partially address b/110782437.
[1] https://crbug.com/718641, fixed but behind MseBufferByPts.
[2] https://crbug.com/354518. Chrome is planning to enable the fix for
[1] before addressing this bug, so we are safe.
Change-Id: I59317740ad3807ca66fa74b3a18fdf7f32c96aeb
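The two insertion conditions above can be condensed into a sketch like the following (illustrative names, not the muxer's actual interface):

  #include <cstdint>

  // Sketch: emit an EditList in the ISO-BMFF output if the first sample
  // has a pts/dts offset (to keep pts and dts buffered ranges aligned in
  // Chrome) or if timestamps go negative (e.g. audio priming).
  bool ShouldInsertEditList(int64_t first_pts, int64_t first_dts) {
    const bool has_pts_dts_offset = first_pts > first_dts;
    const bool has_negative_timestamps = first_pts < 0;
    return has_pts_dts_offset || has_negative_timestamps;
  }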
WebVTT cues without payload may not carry meaningful information, but they
are allowed by the WebVTT specification [1]. They could also be useful
sometimes, e.g. to signal time progression in the live case.
Fixes #433.
[1] https://www.w3.org/TR/webvtt1/#types-of-webvtt-cue-payload
Change-Id: I9e31f4a3789cbdafb7667b64f4019834190ecfc0
Before, we had an assert that would catch a sample with invalid
times; however, input may have bad times.
We did have a message if we saw a sample with a duration equal to
zero. This expands that check to verify that the times are valid in
general, and any sample that is not valid will be ignored.
Fixes #425.
Change-Id: I9774bfbdbd401f3016d2c345665b9973d1889db7
This flag was designed for two purposes:
- Grouping fragments into subsegments, achieving three level hierarchy:
segment < subsegment < fragment.
- Indicate whether to generate 'sidx' box in media segments (when the
value is set to a negative number).
There is no practical use case for the first purpose. Removing it to
simplify the code and reduce confusion.
Introduce another flag --generate_sidx_in_media_segments for the second
purpose.
Change-Id: I4be7cd42662fb324c1158b978e05768ee49dd048
It was implemented to work around Chromium's DTS bug
https://crbug.com/398130, but the workaround does not really work in
all situations.
Remove it now as we already have another workaround available.
Change-Id: I291f559d78120fb743a6679b7d927e5bbc5b6b4e
Previously, the text padder media handler would assume that text always
started at time zero. This would work for VOD but would result in a
large pad at the start of LIVE content.
To avoid this, the text padder will use a bias to test whether or not it
thinks the content starts at zero. Right now the bias is set to be 10
minutes, but will later be configurable with a command line flag.
10 minutes was used as LIVE content will have much larger values and VOD
content should have much lower values.
Issue: #416
Change-Id: I07af15a577392fb030e36f052085cd4e667700e8
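The heuristic boils down to something like the sketch below (illustrative names; the 10-minute value comes from the description above):

  #include <cstdint>

  // Sketch: only pad from time zero if the first text sample starts
  // within the bias window; otherwise assume live content whose
  // timestamps are large absolute values.
  const int64_t kZeroStartBiasMs = 10 * 60 * 1000;

  bool ShouldPadFromZero(int64_t first_sample_start_ms) {
    return first_sample_start_ms < kZeroStartBiasMs;
  }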
Added the key frame field to the IsMediaSample matcher. All
tests that do not care about a sample being a keyframe (or not)
have been updated to use "_".
Change-Id: I44180687c58c260b6856e683d647f532227b14d5
Updated all the stream data matchers we use in our unit tests
to allow us to use matchers in them. We are now able to use "_"
to ignore specific parameters.
With this we were able to replace the different versions of
matchers for each stream data type with a single instance for
each type.
Includes updates to printing strings to the listener. Strings
now go through a "pretty" function to help make it easier to
read them in the output.
Change-Id: I146351b54fccd63ab9ec936877e6c6b30f9aa9fc
Make the text stream info factory method in media_handler_test_base
require the caller to specify the time scale.
Issue: #399
Change-Id: Ibdfb183e0aa3f4ff50edf6b58c4e9b966006c6d2
To ensure that every variable in a box is explicitly set,
every variable has been assigned a default in the header.
Change-Id: Iaa806c4058ac6621a64363a00040fbd9903c6710
Text Samples with no payload should be ignored, so this adds a
test to check that samples with no payload get treated the
same as a gap.
As long as this case is true, using gaps in our other tests should
functionally be the same as using samples with no payload.
Change-Id: Ic16b240c43eda2514b537a2d938d4135638adc4e
Before, we used the same payload for each text sample, as we were
focusing on the times rather than the contents.
As we look to add tests that rely on specific sample payloads, we
need to change the tests to explicitly set the payload for each
sample.
Change-Id: I24174686f46535cf6c2d59a18308101a3bb51c87
EPT (earliest presentation time) may be adjusted not to be lower than
the decoding timestamp (dts), but the adjustment should only be done
on the first file when there is one file per Representation per Period.
The second file and onwards should not be adjusted; otherwise a GAP
would be created.
Closes #384.
Closes b/78517422
Change-Id: I56771ad8fbbe6a87b832ec58854cfbf37d5f1817
In our text to mp4 tests, we were only checking if the times on
the samples lined-up with what we were expecting.
We want to check that text sample contents (ids, settings, and
payloads) were correctly merged into a single media sample. To
verify this, we now check if the sample ids appear in the
media sample.
Change-Id: Ica1a85a14e7b116275e3571332b2e90d7bc44c45
In our text to mp4 tests, we used the same sample id for each sample;
this changes it so that each sample (within a single test) has a
unique id.
This is done in preparation to look for the ids in the created
media samples.
Change-Id: I3215a6f09279af8f40e1ce8a959e0a522a811173
The previous text to mp4 webvtt pipeline was incomplete. It
did not insert ad cues and it could only insert a segment
after a sample ended.
Now the pipeline supports ad cue insertion and segment insertion
mid text sample. This required the pipeline to use the text
chunker (to split samples and insert segments) and required
a major overhaul of the text to mp4 converter.
Before, the converter came before the chunker. This meant that
the converter only expected to see stream info and text samples.
Moving the converter after the cue aligner and chunker means
that the converter had to be aware of segments and cues.
The general approach is the same; however, the converter will
convert the samples per segment, as the chunker will introduce
duplicate samples if a sample spans across segments.
Closes #362. Closes #382.
Change-Id: I0f54a40524c36a602ad3804a0da26e80851c92fd
In the text sample box (for mp4) there was a value called
"data_reference_index" that was never initialized. This meant
that it took on various values and caused different results
between runs.
Change-Id: I4b18ac97ec4700f6e651b14898ef250713a4253c
Renamed all the files called "webvtt_output_handler*" to
"webvtt_text_output_handler*" to better reflect the class name
in them.
Change-Id: I977bab362076974a124f263bcefff716ed8b6a0f
We don't want to allow any handler to be copyable or assignable,
so this change enforces that for the webvtt output handler.
Change-Id: Ie0d59d6dbfb7a5e00bb4dd1422cd696d1a2d6072
Before, the webvtt output handler was written so that it could
share code between a segmented and non-segmented handler. As
we are not worried about that right now, this change simplifies
the handler to just be about segmented output.
Change-Id: I29dbc4e3a4ffbeb7ea10e23db489ee74b398a6c4
Instead of failing immediately, ignore unsupported audio codecs when
parsing the source file, as there may be more than one stream in the
source file. This allows the supported streams to be packaged.
Closes #395.
Change-Id: I01005a93a19012c19065251647c9b06dd25c673a
The Cue class was from a previous WebVTT implementation and
is not used in the current implementation. It was missed when the
other classes were removed. This change removes it.
Change-Id: I661ab3fcd80b5e5ef98b5213746b341a4028d1a1
When originally implementing the webvtt parser, there was a
misunderstanding in what the BOM was supposed to be
(https://en.wikipedia.org/wiki/Byte_order_mark). This corrects the
misunderstanding.
Closes #397.
Change-Id: I250d392db228e5e9b86684614b57adc5d8a4e5fe
This is in preparation for supporting the entitlement license API, where
the common encryption server may return concatenated PSSHs directly.
Refactored ProtectionSystemSpecificInfo into a struct containing
concatenated PSSHs. This will make it easier to pass PSSHs around.
Also, most of the time, users of ProtectionSystemSpecificInfo do
not care what is in the PSSH, so PSSH box parsing and building were
moved out of ProtectionSystemSpecificInfo.
b/78171767
Change-Id: I1c4d5e7e23efd2f7d4b2b9704378323112e47f00
Packager uses ThreadedIO to write media segments and manifest /
playlists. There was a possibility of media segment writes being
delayed and scheduled after updating the manifest / playlists.
This CL fixes the race condition.
Also added a note on how segments can be synced to cloud storage to
avoid the race condition during file sync.
Also added a live WebM test.
Fixes #386.
Change-Id: Icf9c38cdec715fa3dc2836eab1511131e129fe41
To ensure that we can parse content with style and region blocks,
this change updates the parser to skip those blocks so that we
can still parse the cues from a file.
Full style and region support will be added later this year.
Issue #380
Change-Id: I11b8fd862a108c27a5c67b15d4703532b44a1214
Instead, the actual earliest presentation time is used except for
the first segment if there is an offset between presentation time
(pts) and decoding time (dts).
Chrome (as of v66) reports dts instead of pts in buffered ranges in
MSE API. To avoid breaking Chrome, the earliest_presentation_time
of the first segment is set to its dts as Chrome does not like negative
values for
adjusted dts = dts + Period@start (0 for the first period)
- presentationTimeOffset (earliest_presentation_time).
Fixes#303.
Change-Id: I5ca80e05d5570961400499436f2bcc01f06e69e0
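In short, the selection can be sketched as below (illustrative names, not the actual segmenter code):

  #include <cstdint>

  // Sketch: report the real earliest presentation time for every segment
  // except the first, where dts is used so that
  //   adjusted dts = dts + Period@start - presentationTimeOffset
  // never goes negative in Chrome's dts-based MSE accounting.
  int64_t SegmentEarliestPresentationTime(bool is_first_segment,
                                          int64_t segment_pts,
                                          int64_t segment_dts) {
    return is_first_segment ? segment_dts : segment_pts;
  }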
The WebVtt Output Handler did not recognize cue events. This change
allows the handler to accept the events and tell the muxer listener
about them.
Issue #362
Change-Id: I7c3318b72e539adc19af587c8e213fdb0af8290b
Created a media handler to come after parsers that will handle filling
in gaps between text samples. The padder takes a min duration, and if
the samples do not cover the min duration when flushed, one last empty
sample will be injected so that the samples will go up to the min duration.
Change-Id: I88605059664d09279676edac418ff3d4990d7556
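A small sketch of the flush-time padding rule (illustrative struct and names; the real handler emits a text sample):

  #include <cstdint>

  struct PaddingRange {
    int64_t start_ms = 0;
    int64_t end_ms = 0;
  };

  // Sketch: if the samples seen so far end before the requested minimum
  // duration, emit one empty sample covering the remaining gap.
  bool MakePaddingSample(int64_t last_sample_end_ms, int64_t min_duration_ms,
                         PaddingRange* padding) {
    if (last_sample_end_ms >= min_duration_ms)
      return false;
    padding->start_ms = last_sample_end_ms;
    padding->end_ms = min_duration_ms;
    return true;
  }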
Move the webvtt segmenter to the chunking directory so that it
can be converted to a general purpose text chunker.
Change-Id: I9ecd7ee39cb73070dab07b64f65ef24af1404813
Now that we have the end-to-end tests, we no longer need the webvtt pipeline
tests to verify that it is working.
Change-Id: I4ebec34e66eda67c40999d8802b447e2551e1fa6
This is called when reaching the end of the media file. The duration
of the media is updated.
Fixes #340.
Change-Id: I446f2d341b02125d4a7d8c958bda269b5403cb9c
We have an assert that ensures that the end time is greater than
the start time for any cue. However, we never checked that cues
had a non-zero duration when parsing them.
We will throw away cues with a duration of zero (and print a
warning message) as they are not spec compliant.
Closes: #335
Change-Id: I404e8f3a5a8d43eff75a2554db3e38e8d340f421
In the video captured by Android's default camera app, the meta box is
written as a Box instead of the FullBox specified in the spec.
Closes #319.
Change-Id: I526492fdd505d5929c5161cb1ed1503b724de7e9
Looks like Safari does not like v0 tenc box.
- Use the v1 tenc box for the cbcs and cens protection schemes as
required by the CENCv3 spec.
- Set crypt_byte_block and skip_byte_block to 0 for full sample
encryption cbcs and cens.
Fixes #326.
Change-Id: I5581cd856fffc4ff104d950f3ca19b9337d57a78
When a gap is found in the text stream, the WebVtt to Mp4 converter will
now output the special empty vtt cue.
Closes: #324
Change-Id: I8be88c6b7589aa120a2215e1e4b8e98031fe326d
- Add empty lines between different types of renditions to improve
readability.
- Group variants with the same audio/text group together, as it is
where the Adaptation occurs.
- Write master playlist after writing media playlists. This makes
more sense and it is also necessary to have the bandwidth of
the last iframe playlist segment correctly computed.
- For fMP4, I-Frame segment must include the 'moof' header.
- Fix a problem that hls_iframe_playlist_name is not passed to
MuxerListenerFactory.
Issue: #287
Change-Id: Icf37c5de1dc29f85ae3f419cbc3264d04ca491a4
Adding all the code needed to allow the webvtt to mp4 converter to
send empty vtt cues. This change does everything except writing the
mp4 box.
Bug: #324
Change-Id: I16188e6357632b2ed06f7e9bab7844f093266696
Instead of always tracking segments with an index, this change makes a struct
that will act as the segment and track which samples are in it. Each
segment will store all the samples in the order they were given; it will
avoid any sorting.
Closes: 72867775
Change-Id: Ic5829161510fe8f3320d960c3bc4a276c26ff3be
Create WebVtt test that shows that the current implementation of the
webvtt segmenter does not respect the input order of cues.
Bug: 72867775
Change-Id: I811b3a93c10650e0cf9290c9c0c1680f562deb30
According to the HLS+WebVTT spec, if there is no text in a segment,
a webvtt file with no cues can be added to the manifest. Outputting
empty segments allows the accumulated duration in the Master Playlist
to better represent the duration of the text stream.
Spec Reference: //tools.ietf.org/html/draft-pantos-http-live-streaming-23
Section 3.5. WebVTT
Bug: #205
Change-Id: I5de01200fd9fa99c57949c773e8ee926b0f6ba8a
Use a mock muxer listener so that we can verify that the webvtt
output handlers will write to a manifest correctly.
Change-Id: I0294a998bfaf06a6d8f7d4c287fb014839b38f73
Connected the WebVtt Pipeline together to test that the parser,
segmenter, and output all work together as expected.
Change-Id: I07138f7b5b1318f84c27c5b607d8df207c57ddb3
- Update curl to 7.57.0
- Roll clang, which is needed due to MacOS / XCode update
- Fix and suppress compilation errors due to the clang update
Fixes #285.
Change-Id: Ibac3288c641861605c3c0500d34d27373e6eecfe
This change introduces handlers to output WebVtt text files. There is
only one output but there is a common base to support others.
WebVttOutputHandler handles all communication with other handlers,
and WebVttSegmentedOutputHandler is responsible for listening for events
and choosing when and where to write cues and headers.
Bug: 36138902
Change-Id: I2b13a94262554398e66fee8cf024aa21041ddbab
Instead of having multiple char readers, one for strings and
one for files, just have one for files and use memory files
when testing.
Change-Id: Id1a2230046ba540ddf69ca10edb3edc74d2419b6
To better align with the Chunking Handler, the Text Segmenter
now outputs the SegmentInfo at the end of the segment rather
than the start.
Change-Id: If69ab951947d00779b4b63a52c4b6662bbdc4c0d
Implemented a MediaHandler that takes text samples and creates
media samples. The data in each media sample is the MP4 box for
non-overlapping cues.
As per WebVtt in Mp4, all cues must be non-overlapping. This handler
takes care of grouping and dividing cues.
Bug: 36138902
Change-Id: I0c1d27964180c14a22cb200591f70e46e04a651f
This change creates the webvtt segmenter which will be used for
segmented text webvtt for HLS. It takes a stream of text samples
and injects segment info messages based on the segment duration.
It is possible for samples to extend between segments, if so the
sample will appear twice (once in each segment).
Change-Id: Iae0134ee61cf269948026086520b6d3f8ce3785b
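Which segments a cue lands in can be sketched as below, assuming fixed-duration segments indexed from zero and cues with a positive duration (illustrative names):

  #include <cstdint>
  #include <utility>

  // Sketch: a cue is emitted into every segment it overlaps, so a cue
  // crossing a boundary shows up once in each segment it touches.
  std::pair<int64_t, int64_t> SegmentsForCue(int64_t start_ms, int64_t end_ms,
                                             int64_t segment_duration_ms) {
    const int64_t first_segment = start_ms / segment_duration_ms;
    const int64_t last_segment = (end_ms - 1) / segment_duration_ms;
    return {first_segment, last_segment};
  }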
Took the WebVTT Media Parser and created the WebVTT Parser
that will take in a file and output a stream of cues that
will later be passed to another Media Handler that takes in
cues and chunks them.
Bug: 36138902
Change-Id: Ic77813fe19678e85d500269e69f46917510ab7ec
Seeing some failures on some platforms when compiled with clang
disabled:
GYP_DEFINES="clang=0" gclient runhooks
Several changes to make it work:
1. Mark packager code with packager_code=1 in GYP definitions.
2. Disable a few checks in non-packager code, over which we do not have
direct control: dangling-else, deprecated-declarations,
unused-function.
3. Fix the relevant errors in packager code.
4. Revert HAVE_STROPTS_H in the curl config, which is not available in
all Linux distributions.
Fixes #286. Fixes #293.
Change-Id: I729b41f99403c5ad9487c6cc4a7dc06f6323cef8
The error is logged only once to avoid log spamming.
Added definitions for the unsupported stream types for easy lookup.
Change-Id: I097e2f05759bc84ef03f264cfabd2fb20da7c711
The spec allows having more than one 'mdat' box even if it is not
used.
This is happening on some mp4 files in the wild:
http://www.sample-videos.com/.
Fixes #298.
Change-Id: I729cc94bee095560b5c4b3ee8511323f25f7ad5a