This also allows setting the language of different text streams from
the same input. Multiple output streams can share the same input
stream, each with a different cc_index value and its own language.
If no language is specified, the language is pulled from the input
when possible.
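For example (illustrative descriptors only), two text streams could be
selected from the same input with cc_index=0,language=en on one stream
and cc_index=1,language=fr on the other.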
Change-Id: I7078710b509b7d77dad8cb4299a82f954af7e9e7
Note that this only supports a single page within the DVB-sub stream.
Multiple pages will be merged together. A follow-up will allow
selecting a specific page.
This only supports output as TTML or MP4+TTML; you cannot have
DVB-sub output, nor can you output it in WebVTT. Since DVB-sub uses
images, it is difficult, if not impossible, to represent it in WebVTT.
This also only supports interlaced images, not progressive images or
text.
Closes #832
Change-Id: Id6dbb6393c7b9a05722e61c6bd255bef5e69a7d8
Issue #149
Co-authored-by: Andreas Motl <andreas.motl@elmyra.de>
Co-authored-by: Rintaro Kuroiwa <rkuroiwa@google.com>
Co-authored-by: Ole Andre Birkedal <o.birkedal@sportradar.com>
I.e. the flags --generate_sidx_in_media_segments and
--nogenerate_sidx_in_media_segments work for both single-segment
and multi-segment modes with this change.
Related to #862.
Change-Id: Icd27fd00e8e036ba0c4709b48650372429cc0351
The reference count in the 'sidx' box is a uint16 field, which allows
at most 0xFFFF entries, i.e. at most 0xFFFF subsegments, which is
roughly 18 hours for one-second segments.
Do not fail packaging when this happens. Instead, generate a warning
and truncate the number of references to 0xFFFF.
Note that the actual number of mp4 fragments in the mp4 file can still
be more than 0xFFFF. The stream will not play to the end in DASH, but
it will play successfully in HLS.
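A minimal sketch of the truncation (illustrative only; the names and
logging differ from the actual Packager code):

  #include <cstddef>
  #include <iostream>
  #include <vector>

  // Clamp the number of 'sidx' references to what the uint16
  // reference_count field can hold, warning instead of failing.
  template <typename Reference>
  void ClampSidxReferences(std::vector<Reference>* references) {
    constexpr std::size_t kMaxReferences = 0xFFFF;
    if (references->size() > kMaxReferences) {
      std::cerr << "Too many sidx references (" << references->size()
                << "); truncating to " << kMaxReferences << ".\n";
      references->resize(kMaxReferences);
    }
  }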
Works around #862.
Change-Id: Ib3930418d1528df1f9ea64cda0d0ebaa78d26abb
This also changes the callbacks a bit to (a) avoid passing references
for already ref-counted types, and (b) not pass the PID, since the
parent knows it and gives it to the child parser.
Issue #832
Change-Id: I7dd44436c8d1ad81d42a813d16f850175b85ad1a
This changes the default MP4 text output to use TTML and adds a way
to choose which format is used. This is done with 'format=ttml+mp4'
or 'format=vtt+mp4'.
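For example (illustrative descriptor),
in=subs.vtt,stream=text,format=ttml+mp4,output=subs.mp4 converts a
WebVTT input to TTML in MP4, while format=vtt+mp4 keeps the text as
WebVTT in MP4.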
This also fixes the boxes output for WebVTT in MP4.
Change-Id: Ieaa7fc44fbf4dc020a5bb70cfa3578ec10e088ce
This only supports TTML output, meaning the user can convert WebVTT
into TTML but not the other way around. This will be useful for
DVB-sub subtitles, which are better supported in TTML.
This only adds text-based output; a follow-up will add MP4 support.
Change-Id: I0944b7df95d7765e55f203fc5e9a644f5c455dd8
This adds a new path when parsing MPEG2-TS streams to ignore unsupported
streams. This allows extracting supported streams when some of the
streams are unsupported. For example, you can extract audio from a
file that has unsupported video.
Change-Id: I608fcb19d0a573bfd35e9272f60b0b69346ae11a
This adds more generic settings for regions and CSS styles. These are
global settings, so they go on the StreamInfo object.
Change-Id: Ibb76c060206152ccf8e9a067c09877226f67c927
Now text cues are composed of nested fragments that can be individually
styled. This allows portions of the cue to be bold, etc. The
WebVTT parser doesn't parse the inline styling tags in the input, but
the original tags are preserved in the WebVTT output. The WebVTT
output will also add tags if style elements are present in the cue
object.
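A rough illustration of the nesting (hypothetical types, not the
actual classes):

  #include <string>
  #include <vector>

  // A cue body is a tree: each fragment is either a plain-text leaf or
  // a group of child fragments, and each node carries its own style.
  struct CueFragment {
    bool bold = false;
    bool italic = false;
    bool underline = false;
    std::string body;                   // Used when this is a leaf.
    std::vector<CueFragment> children;  // Used when this is a group.
  };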
Change-Id: I6abba4175e376e4f753193f7d8cac63e958d3c89
Now the Cue settings are a generic object that is parsed from WebVTT.
This will allow setting them in different parsers without having to
use WebVTT-specific syntax.
Change-Id: I36689bec725bd2e515af962b7174fc5977f96fa2
This lays the groundwork for more generic text cues by using a more
generic object for the settings and the body. This also changes
TextSample to be immutable, accepting its fields in the constructor
instead of using setters.
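Sketch of the new shape (hypothetical field types; the real
constructor takes the generic settings and body objects described
above):

  #include <cstdint>
  #include <string>
  #include <utility>

  // All fields are supplied at construction and exposed only through
  // const accessors; there are no setters.
  class TextSample {
   public:
    TextSample(int64_t start_ms, int64_t end_ms,
               std::string settings, std::string body)
        : start_ms_(start_ms),
          end_ms_(end_ms),
          settings_(std::move(settings)),
          body_(std::move(body)) {}

    int64_t start_ms() const { return start_ms_; }
    int64_t end_ms() const { return end_ms_; }
    const std::string& settings() const { return settings_; }
    const std::string& body() const { return body_; }

   private:
    const int64_t start_ms_;
    const int64_t end_ms_;
    const std::string settings_;
    const std::string body_;
  };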
Change-Id: I76b09ce8e8471a49e6bf447e8c187f867728a4bf
Now text-based WebVTT also uses the generic media pipeline. This
converts the WebVttTextOutputHandler to a WebVttMuxer to be more
consistent with the other muxer types.
This also allows choosing between single-segment and multi-segment
text. Before, we would generate both and use single-segment for DASH
and multi-segment for HLS; now you can choose either one, and both are
supported in DASH and HLS.
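For example (illustrative descriptors), output=subs.vtt produces a
single-segment text stream, while segment_template=subs_$Number$.vtt
produces multi-segment text; either form can be referenced from DASH
or HLS.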
Change-Id: I6f7edda09e01b5f40e819290d3fe6e88677018d9
This changes it from an OriginHandler to a MediaParser and moves the
handling of it to the Demuxer. This will allow more generic handling
of text by giving it the same abstractions as video/audio handling.
Change-Id: Ibbde3c84d228ec8e83af1ed266ea97dbc9589c24
In addition to the MediaSample handling of the MediaParser, this now
adds callbacks for TextSample. This allows reading text streams from
the media files.
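Roughly, this adds a second callback alongside the media-sample one
(a sketch with illustrative names and std::function; the real callback
types may differ):

  #include <cstdint>
  #include <functional>
  #include <memory>

  class MediaSample;
  class TextSample;

  // One callback per sample kind; the parser invokes whichever matches
  // the elementary stream it is reading.
  using NewMediaSampleCB = std::function<bool(
      uint32_t track_id, std::shared_ptr<MediaSample> sample)>;
  using NewTextSampleCB = std::function<bool(
      uint32_t track_id, std::shared_ptr<TextSample> sample)>;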
Change-Id: I6c00e286e98bc9aafe05b99cf2f7ce6f89d167a9
Instead of having the text readers read from the file directly, they
now accept the data as a stream.
Change-Id: Id1b32c867a8058a68ae7aab5c568f77672a4401d
Opening a named pipe can block until both ends are open, and we cannot
control when the other end will be open. Ideally, we would always
open files in a thread so that Packager can be used with piped inputs
from naive applications without a potential deadlock.
This change will defer opening WebVTT files until the parser Run()
method is called from a thread. This way, WebVTT files being sent in
from a pipe will never be able to block the main thread.
Previously, files were opened on the main thread before calling the
parser constructor, passing the open file to the constructor as an
argument. I also tried doing it in the parser's InitializeInternal()
method, but that is also called from the main thread.
Change-Id: I54cc68ed9d48a8dc697829119be84d4065b1ae1c
Under the command-line flag --mvex_before_trak.
This is needed to work around an Android MediaExtractor bug which
requires |mvex| to appear before |trak|.
Closes #711.
Change-Id: Id41d71af5c0016f59023dda6408bbf502e12ac55
Added Dolby Vision backward-compatible signaling, i.e. for Dolby
Vision profile 8, both the base codec without Dolby Vision and the HDR
codec with Dolby Vision are signaled.
This is achieved by using a new MuxerListener implementation
MultiCodecMuxerListener, which wraps multiple child MuxerListeners and
is able to delegate to the child MuxerListeners based on the codecs in
the stream.
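The wrapping pattern, roughly (hypothetical interface; the real
MuxerListener API has many more events):

  #include <memory>
  #include <string>
  #include <utility>
  #include <vector>

  class Listener {
   public:
    virtual ~Listener() = default;
    virtual bool HandlesCodec(const std::string& codec) const = 0;
    virtual void OnNewStream(const std::string& codec) = 0;
  };

  // Forwards each event only to the children that handle the codec of
  // the stream, so one stream can be signaled as multiple codecs.
  class MultiCodecListener : public Listener {
   public:
    void AddChild(std::unique_ptr<Listener> child) {
      children_.push_back(std::move(child));
    }
    bool HandlesCodec(const std::string&) const override { return true; }
    void OnNewStream(const std::string& codec) override {
      for (auto& child : children_)
        if (child->HandlesCodec(codec)) child->OnNewStream(codec);
    }

   private:
    std::vector<std::unique_ptr<Listener>> children_;
  };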
Closes #341.
Change-Id: I1967bb1ed503087cdd011c364e5fb5647d516ca4
- Parse and extract transfer_characteristics from H264/H265 VUI
parameters.
- Set VIDEO-RANGE attribute in HLS according to the HLS specification
  (see the example after this list):
https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-02#section-4.4.4.2
- Also added an end-to-end test.
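For example, a stream whose VUI signals transfer_characteristics 16
(SMPTE ST 2084 / PQ) gets VIDEO-RANGE=PQ, while the usual SDR transfer
characteristics map to VIDEO-RANGE=SDR (illustrative of the mapping in
the referenced section).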
Fixes #632.
Change-Id: Iadf557d967b42ade321fb0b152e8e7b64fe9ff3e
- Add relevant FOURCCs for Dolby Vision.
- Parse DOVIDecoderConfigurationRecord (dvcC, dvvC) to generate
Dolby Vision codec string.
- Propagate Dolby Vision configs (dvcC, dvvC, hvcE) from Demuxer
to Muxer.
- Add a Dolby Vision end-to-end test.
Support for backward compatibility signaling in DASH and HLS will be
added in a later CL.
Issue #341
Change-Id: If1385df5f48e04b59cb7661130bea48e26b453bf
The latest version of FFmpeg encodes non-standard channel layouts, e.g. 5.1(side), in AAC using a PCE.
This is now supported with the changes below:
- Allow channel_configuration in the ADTS header to be 0, as the actual channel layout is specified
in the PCE.
- Add GetFrameSizeWithoutParsing to determine the frame size before actually parsing the frame.
- Skip and resume later if the whole frame is not yet available.
- Also ensure that the next frame starts with a proper sync word.
Fixes #598.
- Parse parameter set NAL units in the samples.
- Calculate pixel width and height from track width and height.
Fixes #621, #627.
Change-Id: Ic1e120dccbd220b01168f7bf4effeaa43f95b055