This is an automated sync of common workflows for this organization.
The upstream source is:
8bfe75f0d2
Co-authored-by: Shaka Bot <shaka-bot@users.noreply.github.com>
Note:
* An xHE-AAC capable encoder will auto adjust the user-specified SAP/RAP
value to the allowed grid where SAP/RAPs can occur.
e.g.: `-rapInterval 5000` (5s) may result in actual SAPs/RAPs every
4.984s.
* To ensure a SAP/RAP starts a new segment, Shaka Packager needs to be
executed with a `--segment_duration` that is less than or equal to
that adjusted value.
* If every SAP/RAP should trigger a new segment, just set the segment
length to a very low value, e.g. `--segment_duration 0.1` (see the
example command below).
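As a sketch only (file names here are hypothetical, and a real run may
need additional flags or descriptor fields):
`packager 'in=xheaac_audio.mp4,stream=audio,init_segment=audio_init.mp4,segment_template=audio_$Number$.m4s' --segment_duration 0.1 --mpd_output audio.mpd`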
Using the latest depot_tools no longer works. depot_tools also wants
to auto-update itself, which must now be disabled.
We also need to disable the copy of python (vpython) included in
depot_tools, since for some distros, it has dependencies on system
libraries that no longer exist.
Finally, we need to force some distros to use python 2, because our
build system is ancient and needs to be ripped out and replaced some
day soon.
This fixes build issues in our CI, our Dockerfiles, and in general on
certain platforms or distros.
Closes #1023
This is an automated sync of common workflows for this organization.
The upstream source is:
b39597e92d
Co-authored-by: Shaka Bot <shaka-bot@users.noreply.github.com>
This is an automated sync of common workflows for this organization.
The upstream source is:
9517df8f73
Co-authored-by: Shaka Bot <shaka-bot@users.noreply.github.com>
This is an automated sync of common workflows for this organization.
The upstream source is:
aa5e38eb86
Co-authored-by: Shaka Bot <shaka-bot@users.noreply.github.com>
The official, statically-linked Linux builds were crashing in their
use of getaddrinfo, which libcurl was configured to use. Both getaddrinfo
and all of its alternatives available in glibc fail with static
linking.
We can fix this by configuring libcurl to use libc-ares on Linux
instead. This allows us to keep the benefits of a statically-linked
Linux binary.
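For comparison only: Packager applies this through the GYP build of
its bundled curl, but a standalone libcurl build would select the
c-ares resolver with something like:
`./configure --enable-ares`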
Closes #996
Change-Id: Ib4a9eb939813fd165727788726459ef4adf3fc4d
The script in packager/testing/dockers/test_dockers.sh now outputs
more useful info for debugging, uses unique container names per OS so
that the containers can be debugged, and allows filtering to re-run
specific OSes if a build fails.
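For example (the filter argument shown here is hypothetical; check the
script itself for the exact syntax):
`./packager/testing/dockers/test_dockers.sh` (test all distros)
`./packager/testing/dockers/test_dockers.sh Fedora` (re-run a single distro)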
Change-Id: I0cace282549c093a643009f5e60e7545a039168c
This updates the main Dockerfile and all the docker-based
distro-specific tests. The base OS versions have been updated to
versions that have not reached end-of-life status yet, and the list of
dependencies required has been updated and pruned.
Change-Id: Ibcff2f60e739fd5d999af100af76c40aa91a75bc
In many places, we used std::numeric_limits without including the
proper header, <limits>. This would build on some Linux
distributions, but not others.
This adds the missing includes, fixing the build on Fedora, among
other distros.
Change-Id: I63e9e37e5973fe23bbdf9868552db51062b1dae4
In one of the low-latency changes, a change was made to HttpFile that
caused responses to HTTP POST requests to go missing. This resulted
in failures to fetch encryption keys.
The breaking change was recommended by me in a PR review, and was not
caught by any unit tests. New tests would be ideal, but I chose to
fix the bug first, rather than leave the repo broken.
This bug was brought to my attention in google/shaka-streamer#87 and
has not appeared in any release versions.
Change-Id: I9eca73d187a8a30f16c4a920fcdb7b4872253858
## The Issue
- With LL-DASH mode enabled, the gap size warning was hit and printed to the console every time a new segment was registered to the manifest.
- This occurred because the first chunk's size and duration were being stored for each segment, rather than the full segment size and duration. Only the first chunk's metrics are known at registration time because, in low latency mode, the segment is added to the manifest before it has finished being processed and written.
- Because of this, the gap size check was comparing the end time of the first chunk in the previous segment to the beginning time of the current segment, causing the check to fail every time.
## The Fix
- Update a low latency segment's duration and size once the segment file has been fully written.
- The full segment size and duration will be used to update the bandwidth estimator and the segment info list.
- Updating the segment info list to hold the full duration is necessary for satisfying [the gap size check found in Representation.cc](https://github.com/google/shaka-packager/blob/master/packager/mpd/base/representation.cc#L391).
- NOTE: bandwidth estimation is currently only used in HLS
It was suggested in code review for another project that we update the
runner labels for clarity. This brings Packager in line with that, so
that we are using the same labels across projects.
The runners have already been updated to register with the new label.
Change-Id: I30b22530225b5bd22b965ba98d276bcd74ade6cf
Now that we have multiple architectures, we should factor both OS and
architecture into the names of release binaries. This makes the names
more formulaic, as well as consistent with the static-ffmpeg-binaries
repository. Shaka Streamer will pull binaries from both this repo and
that one, so consistent names would be helpful.
The pssh-box release is actually OS and architecture independent, so
remove the suffix from that and only release one copy of it.
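The resulting asset names follow a pattern along the lines of
packager-<os>-<arch>, for example (illustrative, not exhaustive):
`packager-linux-x64`
`packager-linux-arm64`
`packager-win-x64.exe`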
Change-Id: Ief3de49fae267c5267647a8dd4377023777ead37
The generate_version_string script was only producing correct results
in python 2, not python 3. The gyp file that references it explicitly
runs it in python3. The shebang line of the script has been updated
to match. The script itself has been updated such that it now works
correctly in both python2 and python3.
Scripts that are only used as modules (not executed directly) have had
their shebang lines removed.
This fixes CI failures on GitHub Actions.
Change-Id: I309bafd2fb05e8fb33f5e092ead179c8c42ea5d3
# LL-DASH Support
These changes add support for LL-DASH streaming.
**NOTE:** LL-HLS support is still in progress, but it's coming. :)
## Testing
`./chunking_unittest --gtest_filter="ChunkingHandlerTest.LowLatencyDash"`
`./media_event_unittest --gtest_filter="MpdNotifyMuxerListenerTest.LowLatencyDash"`
`./mpd_unittest --gtest_filter="PeriodTest.LowLatencyDashMpdGetXml"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifyAvailabilityTimeOffset"`
`./mpd_unittest --gtest_filter="SimpleMpdNotifierTest.NotifySegmentDuration"`
`./mpd_unittest --gtest_filter="LowLatencySegmentTest.LowLatencySegmentTemplate"`
Note: packager_test must be run from the main project directory.
`./out/Release/packager_test --gtest_filter="PackagerTest.LowLatencyDashEnabledAndUtcTimingNotSet"`
The newest pylint release complained about several issues that the
older release did not. This resolves those issues:
- removes unneeded "u" prefix from strings
- adds "encoding" parameter for all open() calls
- because "encoding" is a python3-only parameter, use python3 in all
the scripts that we control
Unfortunately, python2 is still required for any scripts that import
modules from the ancient Chromium build system we're using
(referenced by DEPS), as well as for the kokoro scripts.
Change-Id: I2e9f97af508efe58b5a71de21740e59b1528affd
This was causing failures on arm64, where the build action had an
arm-specific clause that was skipped due to the missing parameter.
Change-Id: I71b7fb15120855c444749dc2216b5f19f0561f6e
When using CC=clang CXX=clang++, there is a binutils version check
that does not work correctly in common.gypi. Since we are stuck with
a very old version of chromium/src/build, there is nothing to do but
patch it to remove the check. Thankfully, this version number does
not control anything critical in the build settings as far as we can
tell.
Change-Id: Id749d97c5898917592f66136538ee0fa5ca78767
We never produced static release executables on Linux before, but the dynamic libraries they depended on were universal enough that nobody noticed. Now that we have released v2.5 and switched to GitHub Actions for CI builds, the Linux executables depend on libatomic, which is causing issues for some users.
Although we can't create fully-static executables on macOS or Windows, we can at least do so on Linux.
This adds a GYP variable static_link_binaries which can be set to request full-static binaries on Linux. This also exposes the Chromium build variable disable_fatal_linker_warnings, which is necessary when static linking on Linux due to static-link-related warnings generated by libcurl for its use of getaddrinfo. Finally, this enforces the definition of __UCLIBC__ with static linking on Linux, which is the only way to disable malloc hooks in Chromium base. Those hooks cause linker failures when linking statically on Linux.
A new check has been added to the release workflow to ensure that the builds we create are statically linked on Linux.
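As a rough sketch of building and verifying a static Linux binary with these settings (the exact invocation may differ):
`GYP_DEFINES="static_link_binaries=1 disable_fatal_linker_warnings=1" gclient runhooks`
`ninja -C out/Release`
`ldd out/Release/packager` (a fully static binary reports "not a dynamic executable")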
Closes #965
There is not a good reason to use a long-lived token attached to
shaka-bot. Instead, use a short-lived, automatic token generated by
GitHub Actions for the workflow run.
This converts all time parameters to signed, finishing a cleanup that
was started in 2018 in b4256bf0. This changes the type of:
- timestamps
- PTS specifically
- timestamp offsets
- timescales
- durations
This excludes:
- MP4 box definitions
- DTS specifically
This is meant to address signed/unsigned conversion issues on arm64
that caused some test cases to fail.
Change-Id: Ic752a20cbc6e31fea6bc0894d1771833171e7cbe
This fixes the Debug build of libpng on arm64 by avoiding CPU-specific
optimizations that are not in our sources list. The Release build
appears to have been unaffected, possibly due to link-time
optimizations or dead code stripping.
Change-Id: I900e00fe30b9f3748f2587cfea89a636b3a19811
This brings our default build config more in line with what is
necessary for some platforms anyway: using the system-installed
toolchain and sysroot to build everything.
We will no longer fetch source or binaries for any specific build
tools, such as libc++, clang, gold, binutils, or valgrind.
The main part of this change updates the default gyp settings in
gyp_packager.py. For this, a bug in gyp_packager.py had to be fixed
in which similar GYP_DEFINE key names (such as clang and host_clang)
would conflict, causing some defaults not to be set properly.
In order to enable clang=0 by default, some changes had to be made in
common.gypi:
- compiler macros added to fix a compatibility issue between
Chromium's base/mac/ folder and the actual OSX SDK
- replaced clang_warning_flags variables with standard cflags
settings, plus xcode_settings for OSX
- turned off warnings-as-errors for non-shaka code, rather than
allow-listing specific warning types, since we can't actually fix
those warnings on any platform
- disabled two specific warnings in shaka code, both of which are
caused by headers from our non-shaka dependencies
Also, one warning (missing "override" keyword) has been fixed in
vod_media_info_dump_muxer_listener.h.
Although these changes were made to simplify building on a wider
array of platforms (arm64, for example), they seem to make the build
a bit faster, too. For me, at least, on my main Linux workstation:
- "gclient sync" now runs 20-30% faster
- "ninja -C out/Release" now runs 5-13% faster
The following environment variables are no longer required:
- DEPOT_TOOLS_WIN_TOOLCHAIN
- MACOSX_DEPLOYMENT_TARGET
Documentation, Dockerfiles, and GitHub Actions workflows have been
updated to reflect this.
The following GYP_DEFINES are no longer required for anyone:
- clang=0
- host_clang=0
- clang_xcode=1
- use_allocator=none
- use_experimental_allocator_shim=0
Documentation, Dockerfiles, and GitHub Actions workflows have been
updated to reflect this.
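With those defaults in place, a plain build on a typical Linux host
needs no special environment; roughly:
`gclient sync`
`ninja -C out/Release`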
The following repos are no longer dependencies in gclient:
- binutils
- clang
- gold
- libc++
- libc++abi
- valgrind
The following gclient hooks have been removed:
- clang
- mac_toolchain
- sysroot
Change-Id: Ie94ccbeec722ab73c291cb7df897d20761a09a70
Internal CI systems and the new GitHub CI system were out of sync,
with the external system not doing any linting. Further, the internal
system was using an internal-only linter for Python.
This creates a script for Python linting based on the open-source
pylint tool, checks in the Google Style Guide's pylintrc file, creates
a custom action for linting and adds it to the existing workflows,
fixes pre-existing linter errors in Python scripts, and updates pylint
overrides.
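Roughly, the new lint step amounts to something like this (the paths
are assumptions, not the script's exact contents):
`python3 -m pylint --rcfile=pylintrc packager/tools/*.py`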
b/190743862
Change-Id: Iff1f5d4690b32479af777ded0834c31c2161bd10
Instead of printing a binary object, treat the output of clang-format
as a utf-8 string.
b/190743862
Change-Id: I596d223792597f8157fdee2d75773131cc858c9a
Testing CI workflows is a pain. This usually involves forking the
main repo and testing various operations there, where the results will
not break the main repo.
However, some things like NPM and Docker package names were initially
hard-coded. This meant that a fork would need to customize those in
the workflows to avoid pushing official-looking packages during CI
testing.
This change moves those hard-coded names to GitHub Secrets. Though
the names are not actually secret, the secret store is per-repo, and
will be independent in a fork. This makes it easier to avoid
accidentally pushing official-looking releases during testing, even if
the fork has access to the same auth tokens.
Change-Id: Ide8f7aa92a028dd217200fca60881333bf8ae579
It turns out that workflows were the wrong way to abstract reusable
pieces of work. This turns common steps into custom actions (build
docs, build packager, test packager) which can be used as encapsulated
steps in multiple workflows.
This is a much more natural way to avoid duplication compared to the
previous approach of triggering one workflow from another. This also
has the benefit of all of the steps of a release being represented on
GitHub as a single workflow, making it easier to understand what is
happening and what event triggered those steps.
Change-Id: Ife156d60069a39594c7b3bb3bc32080e6453b544