Generate an audio-only master playlist if there are no video or subtitle
streams.
We do not currently support mixing audio-only EXT-X-STREAM-INF entries with
video EXT-X-STREAM-INF entries.
Fixes #461.
Change-Id: I999b335ad7abbe183ffcb0f5d471948977c2772f
Add an optional hls_characteristics stream descriptor, which is a colon- or
semicolon-separated list of strings.
Fixes #430.
Change-Id: Ifcf79316e68768ff065891933de565cd0ff32ec4
Only Marlin Adaptive Streaming Specification – Simple Profile is
supported.
Two additional updates:
- Remove the FairPlay ContentProtection element from the DASH MPD, as FairPlay
does not define signaling in DASH.
- Updated the end-to-end test to include all DRMs we support.
Closes #381.
Change-Id: Id12269b471ea34983b782cbd92f687332292ef59
This was introduced earlier to indicate the FairPlay protection system, but in
fact it is sufficient to just use the system ID for the indication.
- Also updated various parts of the pipeline to support an empty PSSH.
- Added an additional FairPlay end-to-end test using fMP4.
Change-Id: Ica48b7b5235e9a2b5a7f722bcd0fc1ef2073ac13
Note that TTML in ISO-BMFF is not supported yet.
Also updated packager_test.py:
- Added a test using TTML passthrough.
- Computed output extension from input extension unless output_format
is specified.
Fixes #478.
Change-Id: Ia917fc4ed3c326782791ed67601fba02ea28b11d
The time of the previous segment was used when generating the segment name.
This resulted in the first segment being overwritten and in mismatching
manifest and media files, which led to playback problems.
Issue #472.
Change-Id: Ia8130ce261585e1a2ede83b26de3e32508de087f
The VP9 level is computed when the container is missing a codec config or
when the level is missing from the codec config.
This fixes VP9 in ISO-BMFF files generated by FFmpeg v4.0.2 or earlier, which
do not set the level in the codec config.
Fixes #469.
Change-Id: I685bfd48be16ee6b2209da1c3173f7d6bb02b36a
Implemented per the AV1 Codec ISO Media File Format Binding at
https://aomediacodec.github.io/av1-isobmff/
and the AOM AV1 codec mapping in Matroska/WebM at
https://github.com/Matroska-Org/matroska-specification/blob/av1-mappin/codec/av1.md
Note that AV1-specific boxes are not supported in this CL, e.g. the AV1
Forward Key Frame sample group entry 'av1f' and the AV1 Multi-Frame sample
group entry 'av1m' are not supported. These boxes are optional.
We will add support later if they are useful to the clients / players.
Encryption is not supported yet.
Issue #453.
Change-Id: I630432d0a9bf82d263ffaf40e57f67fc65eee902
Note that STYLE and REGION are not supported in the mp4 container due to a
spec limitation: 14496-30:2014 does not specify a way to signal
styles/regions inside mp4.
Closes #344.
Change-Id: I05c14df916f7b2c7ca4364ee9407e0eda4dc7a3f
This also ensures that it does not violate std::sort()'s strict ordering
requirement, which is enforced in gcc/g++ though not in clang.
std::sort() strict ordering requirement:
std::sort() needs a comparator that returns true iff the first argument is
strictly less than the second one; in particular, it must return false when
the two arguments are equal.
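For illustration, a minimal comparator sketch (Entry and its priority field
are hypothetical; this is not the actual packager code):

  #include <algorithm>
  #include <vector>

  struct Entry { int priority; };

  // BAD: returns true for equal priorities, violating the strict ordering
  // requirement; gcc/g++ debug checks may flag this, clang typically won't.
  bool BadLess(const Entry& a, const Entry& b) {
    return a.priority <= b.priority;
  }

  // GOOD: returns true only when a is strictly less than b, i.e. it returns
  // false when the two arguments are equal.
  bool GoodLess(const Entry& a, const Entry& b) {
    return a.priority < b.priority;
  }

  void SortEntries(std::vector<Entry>* entries) {
    std::sort(entries->begin(), entries->end(), GoodLess);
  }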
Change-Id: I781cf4ed4125fcad212eba5430a264f3a3d71c16
It is possible to see a negative timestamp if the input mp4 file contains an
EditList.
Also increase the queue size for EncryptionKey to be able to handle smaller
crypto_period_duration values during testing.
Change-Id: I68278bf482d6662e771d9b80f4d5409605065aac
Configurable with --transport_stream_offset_ms.
This is needed to compensate for possible negative input timestamps, which can
happen with ISO-BMFF inputs that contain EditLists.
Issue #112.
Change-Id: I0fce8766c9df2911b9bb859c1e54052a8ed2abfb
In some ISO-BMFF files there is an initial non-zero composition offset, but no
EditList is present.
This is against the ISO-BMFF spec recommendation [1], and we believe that in
most cases the EditList is simply missing.
[1] 14496-12:2015 8.6.6.1
It is recommended that such an edit be used to establish a presentation
time of 0 for the first presented sample, when composition offsets are
used.
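As a rough illustration of the issue (hypothetical helper names, not the
actual demuxer code), pts is derived from dts plus the per-sample composition
offset, so without an EditList the first presented sample does not start at 0:

  #include <cstdint>

  // pts is derived from dts plus the sample's composition (cts) offset.
  int64_t PtsFromDts(int64_t dts, int64_t cts_offset) {
    return dts + cts_offset;
  }

  // Without an EditList, a stream whose first sample has dts 0 and cts
  // offset 2002 presents first at pts 2002 instead of 0. Treating the
  // initial composition offset as an implicit edit shifts presentation
  // back to 0, which is what the recommended EditList would achieve.
  int64_t AdjustedPts(int64_t pts, int64_t initial_cts_offset) {
    return pts - initial_cts_offset;
  }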
Issue: #112.
Fixes: b/110782437.
Change-Id: I23d33810ce536b09a1e22a2644828d824c1314f5
- EditLists in input files are parsed and applied to sample timestamps.
- An EditList will be inserted in the ISO-BMFF output if
  - There is an offset between the initial presentation timestamp (pts)
    and decoding timestamp (dts). Chrome, as of M67, still uses dts in the
    buffered range API [1], which creates various problems when the buffered
    range by pts does not align with the buffered range by dts. There is
    another bug in Chrome that applies EditList to pts only [2]. This means
    that we can insert an EditList to align the pts range and the dts range.
  - MediaSamples have negative timestamps (e.g. for Audio Priming).
You may notice the following change on some content:
- Some media durations are reduced by one or two frames. This is because the
  EditList in the input file was ignored in the previous code, so video
  streams started with a zero dts and a non-zero pts; the smaller of dts and
  pts was used as the starting timestamp (related to the earlier workaround
  for Chrome's dts bug), so the calculated duration was actually a bit larger
  than the actual duration. Now, with the EditList applied, the initial pts is
  reduced to zero, so the media duration is also reduced to reflect the actual
  and correct media duration.
It may also result in negative timestamps in TS/HLS Packed Audio, which will
be addressed in a follow-up CL.
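A simplified sketch of the insertion rule described above (hypothetical
helper, not the actual mp4 muxer code):

  #include <cstdint>

  bool ShouldInsertEditList(int64_t first_pts, int64_t first_dts) {
    // Case 1: pts and dts ranges are misaligned; an edit can map the first
    // presented sample (at first_pts) to presentation time 0 so that the
    // pts range aligns with the dts range.
    if (first_pts > first_dts)
      return true;
    // Case 2: samples carry negative timestamps (e.g. audio priming); an
    // edit is used so presentation starts at timestamp 0.
    if (first_pts < 0)
      return true;
    return false;
  }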
Fixes #112.
Partially address b/110782437.
[1] https://crbug.com/718641, fixed but behind MseBufferByPts.
[2] https://crbug.com/354518. Chrome is planning to enable the fix for
[1] before addressing this bug, so we are safe.
Change-Id: I59317740ad3807ca66fa74b3a18fdf7f32c96aeb
I.e. enable --generate_dash_if_iop_compliant_mpd in mpd_generator by
default.
It was enabled in the main packager binary in v1.6.1.
Change-Id: Icfb19758c52c107e8ab17d3fb581923fa291cc0a
- Support decryption verification in _CheckTestResults
- Allow diff_files in _CheckTestResults
- Update all functional tests to use _CheckTestResults
Change-Id: I3f9c02f35808eba787becf9b1e5c1ce9238f943e
Instead, the average bandwidth is calculated by dividing the sum of the sizes
of all segments by the sum of the durations of all segments.
This aligns with the requirement in the HLS spec:
https://tools.ietf.org/html/draft-pantos-http-live-streaming-23 section 4.1.
BandwidthEstimator is also simplified to handle all blocks only.
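A minimal sketch of this calculation (illustrative names, not the actual
BandwidthEstimator interface):

  #include <cstdint>

  class AverageBandwidth {
   public:
    void AddSegment(uint64_t size_in_bytes, double duration_in_seconds) {
      total_bits_ += size_in_bytes * 8;
      total_duration_seconds_ += duration_in_seconds;
    }
    // Average bandwidth in bits per second: total bits / total duration.
    uint64_t Estimate() const {
      if (total_duration_seconds_ <= 0)
        return 0;
      return static_cast<uint64_t>(total_bits_ / total_duration_seconds_);
    }
   private:
    uint64_t total_bits_ = 0;
    double total_duration_seconds_ = 0;
  };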
Fixes #361.
Change-Id: I89e7d415a841f4d4048f199de8dae7ffa250467b
DTS was used in ChunkingHandler. As a result, SegmentInfo contained
timestamps in DTS. MP4Muxer has logic to change SegmentInfo to use PTS, but
the other muxers do not.
Benefits of using PTS in ChunkingHandler:
- De-duplicates the redundant logic in MP4Muxer
- Ensures consistent behavior in different output containers
- Consistent with other timestamps, e.g. Ad Cue timestamps
Issue #413
Change-Id: Ib671badf144e0c0866d60f4ff0ac0cbbdd33817e
Having "wvtt" in the codec string (in the master playlist) causes
errors on some older Apple products. As including it is optional,
we are opted to omit it to ensure support for all Apple products.
Close#402
Change-Id: Ib1072bcc26a3ff66e3a6d3204789c0c8c678d4db
Configurable under the flag --use_legacy_vp9_codec_string, which defaults to
false since all major browsers and platforms already support the new-style
vp09 codec string.
Closes #406.
Change-Id: I22e917777f9d66db815ff9d55eb47b6d55806269
EPT (earliest presentation time) may be adjusted so that it is not lower than
the decoding timestamp (dts), but the adjustment should only be done on the
first file when there is one file per Representation per Period. The second
file onwards should not be adjusted; otherwise a gap would be created.
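For illustration, the rule reduces to something like the following
(hypothetical helper, not the actual code):

  #include <algorithm>
  #include <cstdint>

  // Clamp EPT to dts only for the first file of a Representation in a
  // Period; later files keep their EPT so no artificial gap is created.
  int64_t AdjustEarliestPresentationTime(int64_t ept, int64_t dts,
                                         bool is_initial_file) {
    return is_initial_file ? std::max(ept, dts) : ept;
  }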
Closes #384
Closes b/78517422
Change-Id: I56771ad8fbbe6a87b832ec58854cfbf37d5f1817
The previous text-to-mp4 WebVTT pipeline was incomplete. It did not insert ad
cues and it could only insert a segment after a sample ended.
Now the pipeline supports ad cue insertion and segment insertion mid text
sample. This required the pipeline to use the text chunker (to split samples
and insert segments) and required a major overhaul of the text-to-mp4
converter.
Before, the converter came before the chunker. This meant that the converter
only expected to see stream info and text samples. Moving the converter after
the cue aligner and chunker means that the converter has to be aware of
segments and cues.
The general approach is the same; however, the converter now converts the
samples per segment, as the chunker will introduce duplicate samples if a
sample spans across segments.
Closes #362. Closes #382.
Change-Id: I0f54a40524c36a602ad3804a0da26e80851c92fd
Allow the output file name to contain template identifiers like $Number$,
$Time$, etc. Unlike the actual template used in SegmentTemplate, the
identifiers in the output file name are populated before being pushed to the
DASH manifest.
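A rough sketch of the substitution (the helper name and the naive string
replacement are illustrative only), e.g. "output_$Number$.mp4" with segment
number 7 becomes "output_7.mp4":

  #include <cstdint>
  #include <string>

  std::string FillOutputTemplate(std::string name, uint32_t segment_number,
                                 uint64_t segment_time) {
    auto replace_all = [&name](const std::string& id, const std::string& value) {
      for (size_t pos = name.find(id); pos != std::string::npos;
           pos = name.find(id, pos + value.size())) {
        name.replace(pos, id.size(), value);
      }
    };
    replace_all("$Number$", std::to_string(segment_number));
    replace_all("$Time$", std::to_string(segment_time));
    return name;
  }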
Issue: #384
Change-Id: Ife1caadb6fccd32167fa1bc83fe2afcb2d2ad087
- Add a command-line flag --test_packager_version to inject the packager
version; packager_main.cc is also updated to use the same flag.
- Also add an MpdGenerator test in packager_test.py.
Change-Id: I9a11a0ee5502ba30a8acc4d44ebbfaabbe0f2f6e
Previously, for the last iframe in a segment, we waited for the next segment
to arrive before writing the EXTINF tag. If an Ad Cue came in before the next
segment, the EXT-X-PLACEMENT-OPPORTUNITY tag would be inserted before the
iframe in the previous segment.
Fixes #378, #396.
Change-Id: I1ede72a4d4edca94781c7b05bc25397d67916d1a
Problem: Text samples have variable length and therefore act more like
continuous samples, whereas audio and video act more like discrete samples.
Since we use the sample start time, a cue event could fall after the start
time of the last text sample and never get inserted, as there are no more
samples.
Change: After all streams have requested flushing, we make sure to collect all
remaining cue events from the sync point queue and insert them into each
stream.
Issue #362
Change-Id: Id8f136f7ef53531f7a7f412613eac352324e0130
Create an end-to-end test for ad cues. This test's final result is not
correct, but it illustrates the problem we have with cue insertion, which will
be fixed by a later CL.
Change-Id: Ia8b43a53848941be52cf9ade018668e6477e8df2
In H265Parser::ParseSliceHeader, the parser does not handle byte_alignment()
from the spec. byte_alignment() contains at least one bit, which is not
handled right now.
See Section 7.3.2.12 of Rec. ITU-T H.265 v3 (04/2015).
Also added a few size sanity checks in H265Parser to make sure the code does
not crash if an invalid input is provided.
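For reference, byte_alignment() in the spec is one bit equal to 1 followed by
zero bits until the next byte boundary. A sketch using a hypothetical bit
reader interface (not the actual H265Parser code):

  class BitReader {
   public:
    virtual ~BitReader() = default;
    virtual bool ReadBit(int* bit) = 0;      // Returns false on overrun.
    virtual bool IsByteAligned() const = 0;
  };

  bool ParseByteAlignment(BitReader* reader) {
    int bit = 0;
    if (!reader->ReadBit(&bit) || bit != 1)  // alignment_bit_equal_to_one
      return false;
    while (!reader->IsByteAligned()) {
      if (!reader->ReadBit(&bit) || bit != 0)  // alignment_bit_equal_to_zero
        return false;
    }
    return true;
  }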
Fixes #383.
Change-Id: I33b31396058fc5ba67a0fc119be5fe56ec9443b0
Packager uses ThreadedIO to write media segments and manifests / playlists.
There was a possibility of a media segment write being delayed and scheduled
after the manifest / playlist update.
This CL fixes the race condition.
Also added a note on how segments can be synced to cloud storage to
avoid the race condition during file sync.
Also added a live WebM test.
Fixes #386.
Change-Id: Icf9c38cdec715fa3dc2836eab1511131e129fe41
The number of preserved segments outside the live window can be configured
using the flag --preserved_segments_outside_live_window, which defaults to 50,
i.e. 5 minutes for 6-second segments.
Note that segment removal is disabled if the flag is set to 0.
Only the HLS live playlist and the DASH dynamic MPD are affected by this flag.
- Also add end-to-end tests.
Fixes #223.
Change-Id: I8a566efebe2f1552c7d9509ab017bade5a4a1c98
Removed the logic in MuxerListener to estimate bandwidth from file size and
duration, since it is not compliant with the spec.
MpdBuilder will estimate bandwidth from segment size and duration
if bandwidth is not specified in MediaInfo.
Here is the statement from the DASH spec (23009-1:2014):
Consider a hypothetical constant bitrate channel of
bandwidth with the value of this attribute in bits per second
(bps). Then, if the Representation is continuously delivered
at this bitrate, starting at any SAP that is indicated either by
@startwithsap or by any Segment Index box, a client can
be assured of having enough data for continuous playout
providing playout begins after @minbuffertime *
@bandwidth bits have been received (i.e. at time
@minbuffertime after the first bit is received).
For dependent Representations this value specifies the
bandwidth according to the above definition for the
aggregation of this Representation and all complementary
Representations.
Fixes #376.
Change-Id: I0fddce39e709d0cded0a4c9ae59adbbcc97ec5ea
According to the DASH spec (23009-1:2014):
Consider a hypothetical constant bitrate channel of
bandwidth with the value of this attribute in bits per second
(bps). Then, if the Representation is continuously delivered
at this bitrate, starting at any SAP that is indicated either by
@startwithsap or by any Segment Index box, a client can
be assured of having enough data for continuous playout
providing playout begins after @minbuffertime *
@bandwidth bits have been received (i.e. at time
@minbuffertime after the first bit is received).
For dependent Representations this value specifies the
bandwidth according to the above definition for the
aggregation of this Representation and all complementary
Representations.
This suggests that max bitrate should be used instead of average
bitrate.
Also cleaned up BandwidthEstimator code.
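A minimal sketch of the max-bitrate estimate implied by the above
(illustrative names, not the actual BandwidthEstimator interface):

  #include <algorithm>
  #include <cstdint>

  class MaxBandwidth {
   public:
    void AddSegment(uint64_t size_in_bytes, double duration_in_seconds) {
      if (duration_in_seconds <= 0)
        return;
      const uint64_t bitrate =
          static_cast<uint64_t>(size_in_bytes * 8 / duration_in_seconds);
      max_bitrate_ = std::max(max_bitrate_, bitrate);
    }
    // Largest per-segment bitrate seen so far, in bits per second.
    uint64_t Max() const { return max_bitrate_; }
   private:
    uint64_t max_bitrate_ = 0;
  };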
Fixes #376.
Change-Id: Ibf5896394c5c6bb820849771a2129c59202d2273