This restores what the webm_subsample_encryption flag actually does:
- Fixes the flag not working, a problem introduced by previous CLs.
- Also marks the webm_subsample_encryption flag as deprecated.
Closes #220
Change-Id: I03ff843786572a91c01b8bd346a3ce50b129118f
Also added a few more command line flags for end-to-end tests:
- no-remove_temp_files_after_test: do not remove test artifacts after the
  test
- encryption_key, encryption_iv: allow injecting the encryption key/IV from
  the command line
Change-Id: I62084790e10fe6a385b90cb96d9515d8436b2a49
- Add EXT-X-MAP tag for the init segment.
- Do not set the output field on the stream descriptor if it is not
  specified on the command line. If it is set (internally), it gets copied
  to the MediaInfo that gets passed to the manifest generators.
b/36279481
Change-Id: I762c55b255699ec691817dc4806b0dee2f7504b8
- The flags were overwritten in a loop and therefore only stored the
  last value of each flag.
- Change the fields into arrays so that the flags are not overwritten
  (see the sketch below).
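A minimal sketch of the bug pattern and the fix; all names below are
hypothetical stand-ins, not the actual parser fields:

    #include <cstddef>
    #include <vector>

    struct Parser {
      // Before: a single `bool entry_flag_;` was assigned inside the loop,
      // so it only kept the flag of the last entry. After: one element per
      // entry, so no value is lost.
      std::vector<bool> entry_flags_;

      void ParseEntries(const std::vector<bool>& bits) {
        entry_flags_.clear();
        for (size_t i = 0; i < bits.size(); ++i)
          entry_flags_.push_back(bits[i]);  // Was: entry_flag_ = bits[i];
      }
    };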
Change-Id: I59762d7c84bc3d81bf3b0b5e85ffe689260d970a
This is a fix for issue #216, where the Representation::SlideWindow() logic
was still active even when a static MPD was requested (using the option
-generate_static_mpd). This could cause static streams longer than
1800 seconds to start playing at (end-position-for-stream - 1800 seconds).
This change is in line with the current usage documentation for
shaka-packager, which states that -time_shift_buffer_depth is only relevant
for dynamic media presentations.
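A minimal sketch of the intended guard; the types below are trimmed-down,
hypothetical stand-ins for the real MPD builder code:

    #include <cstdint>
    #include <list>

    enum class MpdType { kStatic, kDynamic };
    struct SegmentInfo { int64_t start_time; int64_t duration; };

    class Representation {
     public:
      explicit Representation(MpdType mpd_type) : mpd_type_(mpd_type) {}

      void AddNewSegment(int64_t start_time, int64_t duration) {
        segment_infos_.push_back({start_time, duration});
        // Only enforce time_shift_buffer_depth for dynamic MPDs; a static
        // MPD keeps every segment, so long streams still start from the
        // beginning.
        if (mpd_type_ == MpdType::kDynamic)
          SlideWindow();
      }

     private:
      void SlideWindow() { /* drop segments older than the buffer depth */ }

      MpdType mpd_type_;
      std::list<SegmentInfo> segment_infos_;
    };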
Added a flag --strip_parameter_set_nalus. When enabled, parameter
set NAL units, SPS/PPS for H264 and SPS/PPS/VPS for H265, are stripped
from frames when converting a NAL byte stream (Annex B stream) to a NAL
unit stream, which generates avc1/hvc1; otherwise they are not
stripped, and avc3/hev1 is generated.
Parameter set NAL units should not be stripped if they vary across
frames; otherwise the frames may fail to decode.
The flag is enabled by default, as we don't usually see varying
SPS/PPS/VPS and stripping them is more space efficient.
Set --strip_parameter_set_nalus=false to disable the flag if there
are varying SPS/PPS/VPS in the frames. This addresses #206.
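A minimal sketch of how the flag maps to the sample entry type; the helper
and enum below are hypothetical, not the packager's actual code:

    // Stripping parameter sets yields avc1/hvc1 (SPS/PPS/VPS live only in
    // the decoder configuration record); keeping them in-band yields
    // avc3/hev1.
    enum class VideoCodec { kH264, kH265 };

    const char* SampleEntryFourCC(VideoCodec codec,
                                  bool strip_parameter_set_nalus) {
      if (codec == VideoCodec::kH264)
        return strip_parameter_set_nalus ? "avc1" : "avc3";
      return strip_parameter_set_nalus ? "hvc1" : "hev1";
    }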
Change-Id: I34bde6f33069f9f77d51a510b39f58a0f0e141aa
The issue could lead to the MPD attribute mediaPresentationDuration being
wrongly generated for WebM streams with a duration longer than INT32_MAX.
The root cause was that StreamInfo::set_duration() accepted an int instead of
a uint64_t. This appears to be a pure typo, since StreamInfo already uses a
uint64_t internally to represent the duration.
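A sketch of the signature change; the class is abridged to the relevant
member and is an illustration rather than the full StreamInfo:

    #include <cstdint>

    class StreamInfo {
     public:
      // Previously declared as `void set_duration(int duration)`, which
      // silently narrowed durations above INT32_MAX.
      void set_duration(uint64_t duration) { duration_ = duration; }

     private:
      uint64_t duration_ = 0;
    };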
This CL also removes EncryptionConfig stream data type and merges it
into StreamInfo/SegmentInfo instead.
Change-Id: Idb70ce503e61d3c951225cc78b6b15c084e16dcd
- Disable subsample encryption for VP8 in ISO-BMFF.
- Apply block alignment to all subsamples for VP9 in ISO-BMFF,
  instead of just superframes (see the sketch below).
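A minimal sketch of what block-aligning a subsample could look like; the
struct is a hypothetical stand-in with CENC-style clear/cipher byte counts,
not the packager's exact logic:

    #include <cstdint>

    struct Subsample {
      uint32_t clear_bytes;
      uint32_t cipher_bytes;
    };

    // Shrink the encrypted portion to a multiple of the 16-byte AES block
    // size; the leftover bytes become clear so the total size is unchanged.
    void BlockAlign(Subsample* subsample) {
      const uint32_t kAesBlockSize = 16;
      const uint32_t misaligned = subsample->cipher_bytes % kAesBlockSize;
      subsample->cipher_bytes -= misaligned;
      subsample->clear_bytes += misaligned;
    }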
Change-Id: I8dd31cc16e87abc4d538330eaff9acb0509497df
- luma_weight_l0_flag, luma_weight_l1_flag,
  chroma_weight_l0_flag, and chroma_weight_l1_flag were there but never set.
- The user should use the values in pred_weight_table_l0 and
  pred_weight_table_l1 instead, which are set.
Change-Id: Ic9c44fb113717346938a339faf074daa32d4c2d2
Also moved MediaHandler output validation to Initialize instead.
This CL also addresses #122 with consistent chunking.
Change-Id: I60c0da6d1b33421d7828bcb827d18899e71884ce
- WebVttMediaParser uses WebVttSampleConverter to generate
  non-overlapping media samples.
- The media samples contain ISO BMFF boxes.
- Add kCodecWebVtt to signal that the media is WebVTT and
  the samples will be in ISO BMFF boxes.
Change-Id: I639902cdba7b04af75428bc20622e26b8203cfb2
Before this, HLS output did not contain language information.
Now, media playlists are properly tagged with a language in the
master playlist.
b/36134267
Change-Id: I172e946dbedd096a44cb2f917b007cc004756228
- Also sets up the packaging and verifies that it works.
Some of the changes are temporary to get the integration going.
Change-Id: I0cf6c379d185e157808acabb9ef58ff93d4a39ae
The spec actually requires all NAL units to be escaped, so there
is no need to escape them again.
A NAL byte stream is one demarcation method; a NAL unit stream is another
demarcation method. Regardless of the demarcation method used, the
requirements on a NAL unit are the same. See the last paragraph of
7.4.1 NAL unit semantics for the requirement on emulation prevention bytes.
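For reference, a minimal sketch of the emulation prevention escaping the
spec describes (an illustration, not the packager's code): a 0x03 byte is
inserted after any 0x00 0x00 pair that would otherwise be followed by a
byte in {0x00, 0x01, 0x02, 0x03}.

    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> EscapeNalUnitPayload(const std::vector<uint8_t>& rbsp) {
      std::vector<uint8_t> ebsp;
      int zero_count = 0;
      for (uint8_t byte : rbsp) {
        if (zero_count >= 2 && byte <= 0x03) {
          ebsp.push_back(0x03);  // emulation_prevention_three_byte
          zero_count = 0;
        }
        ebsp.push_back(byte);
        zero_count = (byte == 0x00) ? zero_count + 1 : 0;
      }
      return ebsp;
    }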
Change-Id: Icc63fcc5cf965632e331f5af5f673164c7c1663a
CENCv3 recommends only encrypting video data in slice NALs. Slice
data partition NALs should not be encrypted.
In the code, differentiate is_vcl and is_video_slice. They are the
same for H265; for H264, vcl NALs include slice data partition NALs
but video_slice NALs do not.
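A minimal sketch of the H264 distinction with hypothetical helpers (per the
spec, VCL NAL units are types 1-5; only types 1 and 5 carry whole coded
slices, while types 2-4 are slice data partitions A/B/C):

    bool IsVclNalUnit(int nal_unit_type) {
      return nal_unit_type >= 1 && nal_unit_type <= 5;
    }

    bool IsVideoSliceNalUnit(int nal_unit_type) {
      // 1 = coded slice (non-IDR), 5 = coded slice (IDR).
      return nal_unit_type == 1 || nal_unit_type == 5;
    }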
Change-Id: I91f4bdd76d25f0eac50e39aed350ebce3f667121
- Logic for splitting up WebVTT cues into MP4 samples.
- For any cue intervals that overlap, a new sample must be created (see the
  sketch below).
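A minimal sketch of one way to do the split, with hypothetical types:
collect every cue boundary and emit one sample per adjacent pair of
boundaries, containing all cues active in that interval, so samples never
overlap even when the cues do.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <vector>

    struct Cue { int64_t start; int64_t end; std::string payload; };
    struct Sample { int64_t start; int64_t end; std::vector<Cue> cues; };

    std::vector<Sample> MakeNonOverlappingSamples(const std::vector<Cue>& cues) {
      std::vector<int64_t> boundaries;
      for (const Cue& cue : cues) {
        boundaries.push_back(cue.start);
        boundaries.push_back(cue.end);
      }
      std::sort(boundaries.begin(), boundaries.end());
      boundaries.erase(std::unique(boundaries.begin(), boundaries.end()),
                       boundaries.end());

      std::vector<Sample> samples;
      for (size_t i = 0; i + 1 < boundaries.size(); ++i) {
        Sample sample{boundaries[i], boundaries[i + 1], {}};
        for (const Cue& cue : cues) {
          if (cue.start < sample.end && cue.end > sample.start)
            sample.cues.push_back(cue);
        }
        samples.push_back(sample);  // An empty sample represents a gap.
      }
      return samples;
    }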
Change-Id: Icf7478f42d8c5790bf6f404d512c1251dd34c405
This handler is a multi-in multi-out handler. If more than one input is
provided, there should be one and only one video stream; also, all inputs
should come from the same thread and be synchronized.
There can be multiple chunking handlers running in different threads or even
different processes; we use the "consistent chunking algorithm" to make sure
the chunks in different streams are aligned without explicit communication
between the handlers - which would be inefficient and often difficult.
Consistent Chunking Algorithm:
1. Find the consistent chunkable boundary
Let the timestamps for video frames be (t1, t2, t3, ...). Then a
consistent chunkable boundary is simply the first chunkable boundary after
(tk / N) != (tk-1 / N), where '/' denotes integer division, and N is the
intended chunk duration.
2. Chunk only at the consistent chunkable boundary
This algorithm will make sure the chunks from different video streams are
aligned if they have aligned GoPs. However, this algorithm will only work
for video streams. To be able to chunk non-video streams at similar
positions as video streams, ChunkingHandler is designed to accept one video
input and multiple non-video inputs; the non-video inputs are chunked when
the video input is chunked. If the inputs are synchronized - which is true
if the inputs come from the same demuxer - the video and non-video chunks
are aligned.
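A minimal sketch of steps 1 and 2 for the video input, assuming timestamps
share the time base of the chunk duration N; class and method names here
are illustrative, not the actual ChunkingHandler interface.

    #include <cstdint>

    class ConsistentChunker {
     public:
      explicit ConsistentChunker(int64_t chunk_duration)
          : chunk_duration_(chunk_duration) {}

      // Returns true if a new chunk should start at this video frame.
      bool OnVideoFrame(int64_t timestamp, bool is_chunkable_boundary) {
        // Step 1: detect (tk / N) != (tk-1 / N) using integer division.
        const bool crossed =
            has_previous_ && (timestamp / chunk_duration_) !=
                                 (previous_timestamp_ / chunk_duration_);
        has_previous_ = true;
        previous_timestamp_ = timestamp;
        if (crossed)
          pending_chunk_ = true;
        // Step 2: chunk only at the first chunkable boundary (e.g. a
        // keyframe) after the crossing.
        if (pending_chunk_ && is_chunkable_boundary) {
          pending_chunk_ = false;
          return true;
        }
        return false;
      }

     private:
      const int64_t chunk_duration_;  // N.
      int64_t previous_timestamp_ = 0;
      bool has_previous_ = false;
      bool pending_chunk_ = false;
    };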
Change-Id: Id3bad51ab14f311efdb8713b6cd36d36cf9e4639
This will move the ContentProtection element from Representation
to AdaptationSet.
Shaka Player already supports AdaptationSet switching
(urn:mpeg:dash:adaptation-set-switching:2016);
ExoPlayer does not support it yet. Filed
https://github.com/google/ExoPlayer/issues/2431 to track the issue.
(ExoPlayer does not like having ContentProtection in Representation
anyway.)
Closes b/34691105
Change-Id: I69d0a4d0e15a912a35c8b2686620419a28e4cc99
- Deprecated command line flags --profile and --single_segment.
  'segment_template' in stream descriptors implies live profile
  and non-single-segment output.
- Added flag --generate_static_mpd_for_live_profile to generate a
  static MPD for live profile; if not set, a dynamic MPD will be
  generated.
Closes #142
Change-Id: I78879297ed118f0f246c4753a16ad125bd6b5e4f
- Added two new fields to MpdOptions: dash_profile and mpd_type (see the
  sketch below).
- Updated MpdBuilder to support the live profile with a static MPD.
- Command line arguments will be updated in the next CL.
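A minimal sketch of what the two new fields could look like; apart from
dash_profile and mpd_type, the names, enumerators, and defaults below are
assumptions, not the actual code.

    enum class DashProfile { kOnDemand, kLive };
    enum class MpdType { kStatic, kDynamic };

    struct MpdOptions {
      DashProfile dash_profile = DashProfile::kOnDemand;
      MpdType mpd_type = MpdType::kStatic;
      // ... other existing options ...
    };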
Issue #142
Change-Id: Ibd35e5c9e1b1ef98043392e3f04a0af4700135c1