feat: First phase of CMake build system implementation (#1072)

There are a lot of changes in this first phase, because a lot of
infrastructure was required before any meaningful amount of porting
could be done.  Future PRs should be simpler.

<b>Summary of changes:</b><details>

 - Remove old deps:
   - boringssl (replaced with mbedtls, which is lighter and easier to
     build)
   - gflags (replaced with absl::flags)
   - Chromium build tools
 - New deps to replace parts of Chromium base:
   - abseil-cpp
   - glog
   - nlohmann::json (for tests only)
 - Submodules, updates, and CMake build rules for third-party
   libraries:
   - curl
   - gmock/gtest
 - Ported internal libraries and their tests by removing Chromium deps
   and adding CMake build rules:
   - file (now using C++17 filesystem APIs)
   - license_notice
   - status
   - version
 - Test improvements
   - Removed file tests that can never be re-enabled
   - Re-enabled all other disabled file tests
   - Debug JSON values when HTTP tests fail
   - Fixed chunked-encoding issues in HTTP tests
 - Updated and refactored Dockerfile testing
   - All docker files working, with OS versions updated to meet the
     new tool requirements
   - Local docker builds no longer write files to your working
     directory as root
   - Local docker builds can now be run in parallel without clobbering
     each other's build outputs
   - DEBUG=1 can drop you into an interactive shell when a docker
     build fails (see the example below)
 - Updated and heavily refactored workflows and Dockerfiles
   - All docker files now tested in parallel on GitHub, speeding up CI
   - All common workflow components broken out and using workflow_call
     instead of custom actions
   - Self-hosted runners now optional, to make testing easier on forks
   - CMake porting work-in-progress can now be fully tested on GitHub
   - Building ported libraries and passing ported tests on all three
     platforms!
 - CI hacks for macOS removed, now testing on macos-latest!
 - Python2 no longer required!  (Only Python3)
 - Using strict build flags, treating all warnings as errors.

</details>
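
As mentioned in the summary above, DEBUG=1 drops you into an
interactive shell when a docker build fails.  A sketch of that usage
(the distro name "Ubuntu" is hypothetical; valid names come from the
files in packager/testing/dockers/*_Dockerfile):

  DEBUG=1 ./packager/testing/test_dockers.sh Ubuntu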

<b>Required to build:</b>

 - CMake >= 3.16
 - Python 3
 - A compiler supporting C++ >= 17
   - g++ >= 9 if using GCC (Clang also fine)
   - MSVC for Windows
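
For reference, a minimal out-of-source build with these tools,
mirroring the commands in the new build.yaml workflow:

  cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
  cmake --build build --config Release
  (cd build && ctest -C Release -V)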

<b>Still needs work:</b><details>

 - Moving other dependencies into submodules (if we keep them):
   - apple_apsl
   - icu
   - libevent
   - libpng
   - libwebm
   - libxml
   - modp_b64
   - protobuf
   - zlib
 - Port remaining internal libraries:
   - app
   - hls
   - media/base
   - media/chunking
   - media/codecs
   - media/crypto
   - media/demuxer
   - media/event
   - media/formats/dvb
   - media/formats/mp2t
   - media/formats/mp4
   - media/formats/packed_audio
   - media/formats/ttml
   - media/formats/webm
   - media/formats/webvtt
   - media/formats/wvm
   - media/origin
   - media/public
   - media/replicator
   - media/trick_play
   - mpd
 - Port main application
   - Add logging flags in absl and connect them to glog (which expects
     gflags)
 - Port pssh-box.py
 - Port main test targets (packager_test.py and packager_app.py)
 - Update all requirement and build documentation
 - Remove any remaining refs to gclient, depot_tools, ninja
 - Update and complete release workflows using release-please
</details>

Issue #346 (Switch to abseil)
Issue #1047 (New build system)
Joey Parrish, 2022-08-16 11:34:51 -07:00 (committed via GitHub)
commit e6b57f72a8, parent 31129eed64
338 changed files with 2184 additions and 314873 deletions

@@ -1,37 +1,49 @@
# GitHub Actions CI
## Actions
- `custom-actions/lint-packager`:
Lints Shaka Packager. You must pass `fetch-depth: 2` to `actions/checkout`
in order to provide enough history for the linter to tell which files have
changed.
- `custom-actions/build-packager`:
Builds Shaka Packager. Leaves build artifacts in the "artifacts" folder.
Requires OS-dependent and build-dependent inputs.
- `custom-actions/test-packager`:
Tests Shaka Packager. Requires OS-dependent and build-dependent inputs.
- `custom-actions/build-docs`:
Builds Shaka Packager docs.
## Reusable workflows
- `build.yaml`:
Build and test all combinations of OS & build settings. Also builds docs on
Linux.
- `build-docs.yaml`:
Build Packager docs. Runs only on Linux.
- `docker-image.yaml`:
Build the official Docker image.
- `lint.yaml`:
Lint Shaka Packager.
- `test-linux-distros.yaml`:
Test the build on all Linux distros via docker.
## Composed workflows
- On PR (`pr.yaml`), invoke:
- `lint.yaml`
- `build.yaml`
- `build-docs.yaml`
- `docker-image.yaml`
- `test-linux-distros.yaml`
- On release tag (`github-release.yaml`):
- Create a draft release
- Invoke:
- `lint.yaml`
- `build.yaml`
- `test-linux-distros.yaml`
- Publish the release with binaries from `build.yaml` attached
## Workflows
- On PR:
- `build_and_test.yaml`:
Builds and tests all combinations of OS & build settings. Also builds
docs.
- On release tag:
- `github_release.yaml`:
Creates a draft release on GitHub, builds and tests all combinations of OS
& build settings, builds docs on all OSes, attaches static release binaries
to the draft release, then fully publishes the release.
- On release published:
- `docker_hub_release.yaml`:
Builds a Docker image to match the published GitHub release, then pushes it
to Docker Hub.
- `npm_release.yaml`:
Builds an NPM package to match the published GitHub release, then pushes it
to NPM.
- `update_docs.yaml`:
Builds updated docs and pushes them to the gh-pages branch.
- `docker-hub-release.yaml`, publishes the official Docker image
- `npm-release.yaml`, publishes the official NPM package
- `update-docs.yaml`:
- Invoke `build-docs.yaml`
- Push the output to the `gh-pages` branch
## Common workflows from shaka-project
- `sync-labels.yaml`
- `update-issues.yaml`
- `validate-pr-title.yaml`
## Required Repo Secrets
- `DOCKERHUB_CI_USERNAME`: The username of the Docker Hub CI account
@@ -47,3 +59,8 @@
- `NPM_PACKAGE_NAME`: Not a true "secret", but stored here to avoid someone
pushing bogus packages to NPM during CI testing from a fork
- In a fork, set to a private name which differs from the production one
## Optional Repo Secrets
- `ENABLE_DEBUG`: Set to non-empty to enable debugging via SSH after a failure
- `ENABLE_SELF_HOSTED`: Set to non-empty to enable self-hosted runners in the
build matrix
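For example, a fork can set these using the GitHub CLI (assuming `gh`
is installed and authenticated against the fork):

    gh secret set ENABLE_DEBUG --body 1
    gh secret set ENABLE_SELF_HOSTED --body 1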

.github/workflows/build-docs.yaml (new file, 63 lines)

@@ -0,0 +1,63 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A reusable workflow to build Packager docs. Leaves docs output in the
# "gh-pages" folder. Only runs in Linux due to the dependency on doxygen,
# which we install with apt.
name: Build Docs
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
jobs:
docs:
name: Build docs
runs-on: ubuntu-latest
steps:
- name: Install dependencies
run: |
sudo apt install -y doxygen
python3 -m pip install \
sphinxcontrib.plantuml \
recommonmark \
cloud_sptheme \
breathe
- name: Checkout code
uses: actions/checkout@v2
with:
ref: ${{ inputs.ref }}
- name: Generate docs
run: |
mkdir -p gh-pages
mkdir -p build
# Doxygen must run before Sphinx. Sphinx will refer to
# Doxygen-generated output when it builds its own docs.
doxygen docs/Doxyfile
# Now build the Sphinx-based docs.
make -C docs/ html
# Now move the generated outputs.
cp -a build/sphinx/html gh-pages/html
cp -a build/doxygen/html gh-pages/docs
cp docs/index.html gh-pages/index.html

.github/workflows/build-matrix.json (new file, 33 lines)

@@ -0,0 +1,33 @@
{
"comment1": "runners hosted by GitHub, always enabled",
"hosted": [
{
"os": "ubuntu-latest",
"os_name": "linux",
"target_arch": "x64",
"exe_ext": ""
},
{
"os": "macos-latest",
"os_name": "osx",
"target_arch": "x64",
"exe_ext": ""
},
{
"os": "windows-latest",
"os_name": "win",
"target_arch": "x64",
"exe_ext": ".exe"
}
],
"comment2": "runners hosted by the owner, enabled by the ENABLE_SELF_HOSTED secret being set on the repo",
"selfHosted": [
{
"os": "self-hosted-linux-arm64",
"os_name": "linux",
"target_arch": "arm64",
"exe_ext": ""
}
]
}

.github/workflows/build.yaml (new file, 209 lines)

@@ -0,0 +1,209 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A reusable workflow to build and test Packager on every supported OS and
# architecture.
name: Build
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
secrets:
# The GITHUB_TOKEN name is reserved, but not passed through implicitly.
# So we call our secret parameter simply TOKEN.
TOKEN:
required: false
# These below are not actual secrets, but secrets are the only place to
# keep repo-specific configs that make this project friendlier to forks
# and easier to debug.
# If non-empty, start a debug SSH server on failures.
ENABLE_DEBUG:
required: false
# If non-empty, enable self-hosted runners in the build matrix.
ENABLE_SELF_HOSTED:
required: false
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
# Configure the build matrix based on inputs. The list of objects in the
# build matrix contents can't be changed by conditionals, but it can be
# computed by another job and deserialized. This uses
# secrets.ENABLE_SELF_HOSTED to determine the build matrix, based on the
# metadata in build-matrix.json.
matrix_config:
runs-on: ubuntu-latest
outputs:
INCLUDE: ${{ steps.configure.outputs.INCLUDE }}
OS: ${{ steps.configure.outputs.OS }}
ENABLE_DEBUG: ${{ steps.configure.outputs.ENABLE_DEBUG }}
steps:
- uses: actions/checkout@v2
with:
ref: ${{ inputs.ref }}
- name: Configure Build Matrix
id: configure
shell: node {0}
run: |
const enableDebug = "${{ secrets.ENABLE_DEBUG }}" != '';
const enableSelfHosted = "${{ secrets.ENABLE_SELF_HOSTED }}" != '';
// Use ENABLE_SELF_HOSTED to decide what the build matrix below
// should include.
const {hosted, selfHosted} = require("${{ github.workspace }}/.github/workflows/build-matrix.json");
const include = enableSelfHosted ? hosted.concat(selfHosted) : hosted;
const os = include.map((config) => config.os);
// Output JSON objects consumed by the build matrix below.
console.log(`::set-output name=INCLUDE::${ JSON.stringify(include) }`);
console.log(`::set-output name=OS::${ JSON.stringify(os) }`);
// Output the debug flag directly.
console.log(`::set-output name=ENABLE_DEBUG::${ enableDebug }`);
// Log the outputs, for the sake of debugging this script.
console.log({enableDebug, include, os});
build:
needs: matrix_config
strategy:
fail-fast: false
matrix:
include: ${{ fromJSON(needs.matrix_config.outputs.INCLUDE) }}
os: ${{ fromJSON(needs.matrix_config.outputs.OS) }}
build_type: ["Debug", "Release"]
lib_type: ["static", "shared"]
name: ${{ matrix.os_name }} ${{ matrix.target_arch }} ${{ matrix.build_type }} ${{ matrix.lib_type }}
runs-on: ${{ matrix.os }}
steps:
- name: Configure git to preserve line endings
# Otherwise, tests fail on Windows because "golden" test outputs will not
# have the correct line endings.
run: git config --global core.autocrlf false
- name: Checkout code
uses: actions/checkout@v2
with:
ref: ${{ inputs.ref }}
submodules: true
- name: Install Linux deps
if: runner.os == 'Linux'
run: sudo apt install -y libc-ares-dev
- name: Generate build files
run: |
mkdir -p build/
if [[ "${{ matrix.lib_type }}" == "shared" ]]; then
LIBPACKAGER_SHARED="ON"
else
LIBPACKAGER_SHARED="OFF"
fi
cmake \
-DCMAKE_BUILD_TYPE="${{ matrix.build_type }}" \
-DLIBPACKAGER_SHARED="$LIBPACKAGER_SHARED" \
-S . \
-B build/
- name: Build
# This is a universal build command, which will call make on Linux and
# Visual Studio on Windows. Note that the VS generator is what cmake
# calls a "multi-configuration" generator, and so the desired build
# type must be specified for Windows.
run: cmake --build build/ --config "${{ matrix.build_type }}"
- name: Test
run: cd build; ctest -C "${{ matrix.build_type }}" -V
# TODO(joeyparrish): Prepare artifacts when build system is complete again
# - name: Prepare artifacts (static release only)
# run: |
# BUILD_CONFIG="${{ matrix.build_type }}-${{ matrix.lib_type }}"
# if [[ "$BUILD_CONFIG" != "Release-static" ]]; then
# echo "Skipping artifacts for $BUILD_CONFIG."
# exit 0
# fi
# if [[ "${{ runner.os }}" == "Linux" ]]; then
# echo "::group::Check for static executables"
# (
# cd build/Release
# # Prove that we built static executables on Linux. First, check that
# # the executables exist, and fail if they do not. Then check "ldd",
# # which will fail if the executable is not dynamically linked. If
# # "ldd" succeeds, we fail the workflow. Finally, we call "true" so
# # that the last executed statement will be a success, and the step
# # won't be failed if we get that far.
# ls packager mpd_generator >/dev/null || exit 1
# ldd packager 2>&1 && exit 1
# ldd mpd_generator 2>&1 && exit 1
# true
# )
# echo "::endgroup::"
# fi
# echo "::group::Prepare artifacts folder"
# mkdir artifacts
# ARTIFACTS="$GITHUB_WORKSPACE/artifacts"
# cd build/Release
# echo "::endgroup::"
# echo "::group::Strip executables"
# strip packager${{ matrix.exe_ext }}
# strip mpd_generator${{ matrix.exe_ext }}
# echo "::endgroup::"
# SUFFIX="-${{ matrix.os_name }}-${{ matrix.target_arch }}"
# EXE_SUFFIX="$SUFFIX${{ matrix.exe_ext}}"
# echo "::group::Copy packager"
# cp packager${{ matrix.exe_ext }} $ARTIFACTS/packager$EXE_SUFFIX
# echo "::endgroup::"
# echo "::group::Copy mpd_generator"
# cp mpd_generator${{ matrix.exe_ext }} $ARTIFACTS/mpd_generator$EXE_SUFFIX
# echo "::endgroup::"
# # The pssh-box bundle is OS and architecture independent. So only do
# # it on this one OS and architecture, and give it a more generic
# # filename.
# if [[ '${{ matrix.os_name }}' == 'linux' && '${{ matrix.target_arch }}' == 'x64' ]]; then
# echo "::group::Tar pssh-box"
# tar -czf $ARTIFACTS/pssh-box.py.tar.gz pyproto pssh-box.py
# echo "::endgroup::"
# fi
# TODO(joeyparrish): Attach artifacts when build system is complete again
# - name: Attach artifacts to release
# if: matrix.build_type == 'Release' && matrix.lib_type == 'static'
# uses: dwenegar/upload-release-assets@v1
# env:
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# with:
# release_id: ${{ needs.draft_release.outputs.release_id }}
# assets_path: artifacts
- name: Debug
uses: mxschmitt/action-tmate@v3.6
with:
limit-access-to-actor: true
if: failure() && needs.matrix_config.outputs.ENABLE_DEBUG != ''

@@ -1,137 +0,0 @@
name: Build and Test PR
# Builds and tests on all combinations of OS, build type, and library type.
# Also builds the docs.
#
# Runs when a pull request is opened or updated.
#
# Can also be run manually for debugging purposes.
on:
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
inputs:
ref:
description: "The ref to build and test."
required: False
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
# This makes the merge base available for the C++ linter, so that it
# can tell which files have changed.
fetch-depth: 2
- name: Lint
uses: ./src/.github/workflows/custom-actions/lint-packager
build_and_test:
# Doesn't really "need" it, but let's not waste time on an expensive matrix
# build step just to cancel it because of a linter error.
needs: lint
strategy:
fail-fast: false
matrix:
# NOTE: macos-10.15 is required for now, to work around issues with our
# build system. See related comments in
# .github/workflows/custom-actions/build-packager/action.yaml
os: ["ubuntu-latest", "macos-10.15", "windows-latest", "self-hosted-linux-arm64"]
build_type: ["Debug", "Release"]
lib_type: ["static", "shared"]
include:
- os: ubuntu-latest
os_name: linux
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: macos-10.15
os_name: osx
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: windows-latest
os_name: win
target_arch: x64
exe_ext: ".exe"
# 64-bit outputs on Windows go to a different folder name.
build_type_suffix: "_x64"
- os: self-hosted-linux-arm64
os_name: linux
target_arch: arm64
exe_ext: ""
build_type_suffix: ""
name: Build and test ${{ matrix.os_name }} ${{ matrix.target_arch }} ${{ matrix.build_type }} ${{ matrix.lib_type }}
runs-on: ${{ matrix.os }}
steps:
- name: Configure git to preserve line endings
# Otherwise, tests fail on Windows because "golden" test outputs will not
# have the correct line endings.
run: git config --global core.autocrlf false
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
- name: Build docs (Linux only)
if: runner.os == 'Linux'
uses: ./src/.github/workflows/custom-actions/build-docs
- name: Build Packager
uses: ./src/.github/workflows/custom-actions/build-packager
with:
os_name: ${{ matrix.os_name }}
target_arch: ${{ matrix.target_arch }}
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
- name: Test Packager
uses: ./src/.github/workflows/custom-actions/test-packager
with:
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
test_supported_linux_distros:
# Doesn't really "need" it, but let's not waste time on a series of docker
# builds just to cancel it because of a linter error.
needs: lint
name: Test builds on all supported Linux distros (using docker)
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
- name: Install depot tools
shell: bash
run: |
git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
touch depot_tools/.disable_auto_update
echo "${GITHUB_WORKSPACE}/depot_tools" >> $GITHUB_PATH
- name: Setup gclient
shell: bash
run: |
gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
# NOTE: the docker tests will do gclient runhooks, so skip hooks here.
gclient sync --nohooks
- name: Test all distros
shell: bash
run: ./src/packager/testing/dockers/test_dockers.sh

@@ -1,46 +0,0 @@
name: Build Shaka Packager Docs
description: |
A reusable action to build Shaka Packager docs.
Leaves docs output in the "gh-pages" folder.
Only runs in Linux due to the dependency on doxygen, which we install with
apt.
runs:
using: composite
steps:
- name: Install dependencies
shell: bash
run: |
echo "::group::Install dependencies"
sudo apt install -y doxygen
python3 -m pip install \
sphinxcontrib.plantuml \
recommonmark \
cloud_sptheme \
breathe
echo "::endgroup::"
- name: Generate docs
shell: bash
run: |
echo "::group::Prepare output folders"
mkdir -p gh-pages
cd src
mkdir -p out
echo "::endgroup::"
echo "::group::Build Doxygen docs"
# Doxygen must run before Sphinx. Sphinx will refer to
# Doxygen-generated output when it builds its own docs.
doxygen docs/Doxyfile
echo "::endgroup::"
echo "::group::Build Sphinx docs"
# Now build the Sphinx-based docs.
make -C docs/ html
echo "::endgroup::"
echo "::group::Move ouputs"
# Now move the generated outputs.
cp -a out/sphinx/html ../gh-pages/html
cp -a out/doxygen/html ../gh-pages/docs
cp docs/index.html ../gh-pages/index.html
echo "::endgroup::"

@@ -1,182 +0,0 @@
name: Build Shaka Packager
description: |
A reusable action to build Shaka Packager.
Leaves build artifacts in the "artifacts" folder.
inputs:
os_name:
description: The name of the OS (one word). Appended to artifact filenames.
required: true
target_arch:
description: The CPU architecture to target. We support x64, arm64.
required: true
lib_type:
description: A library type, either "static" or "shared".
required: true
build_type:
description: A build type, either "Debug" or "Release".
required: true
build_type_suffix:
description: A suffix to append to the build type in the output path.
required: false
default: ""
exe_ext:
description: The extension on executable files.
required: false
default: ""
runs:
using: composite
steps:
- name: Select Xcode 10.3 and SDK 10.14 (macOS only)
# NOTE: macOS 11 doesn't work with our (old) version of Chromium build,
# and the latest Chromium build doesn't work with Packager's build
# system. To work around this, we need an older SDK version, and to
# get that, we need an older XCode version. XCode 10.3 has SDK 10.14,
# which works.
shell: bash
run: |
if [[ "${{ runner.os }}" == "macOS" ]]; then
echo "::group::Select Xcode 10.3"
sudo xcode-select -s /Applications/Xcode_10.3.app/Contents/Developer
echo "::endgroup::"
fi
- name: Install c-ares (Linux only)
shell: bash
run: |
if [[ "${{ runner.os }}" == "Linux" ]]; then
echo "::group::Install c-ares"
sudo apt install -y libc-ares-dev
echo "::endgroup::"
fi
- name: Force Python 2 to support ancient build system (non-Linux only)
if: runner.os != 'Linux'
uses: actions/setup-python@v2
with:
python-version: '2.x'
- name: Force Python 2 to support ancient build system (Linux only)
if: runner.os == 'Linux'
shell: bash
run: |
echo "::group::Install python2"
sudo apt install -y python2
sudo ln -sf python2 /usr/bin/python
echo "::endgroup::"
- name: Install depot tools
shell: bash
run: |
echo "::group::Install depot_tools"
git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
touch depot_tools/.disable_auto_update
echo "${GITHUB_WORKSPACE}/depot_tools" >> $GITHUB_PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
echo "VPYTHON_BYPASS=manually managed python not supported by chrome operations" >> $GITHUB_ENV
echo "::endgroup::"
- name: Build ninja (arm only)
shell: bash
run: |
# NOTE: There is no prebuilt copy of ninja for the "aarch64"
# architecture (as reported by "uname -p" on arm64). So we must build
# our own, as recommended by depot_tools when it fails to fetch a
# prebuilt copy for us.
# NOTE 2: It turns out that $GITHUB_PATH operates like a stack.
# Appending to that file places the new path at the beginning of $PATH
# for the next step, so this step must come _after_ installing
# depot_tools.
if [[ "${{ inputs.target_arch }}" == "arm64" ]]; then
echo "::group::Build ninja (arm-only)"
git clone https://github.com/ninja-build/ninja.git -b v1.8.2
# The --bootstrap option compiles ninja as well as configures it.
# This is the exact command prescribed by depot_tools when it fails to
# fetch a ninja binary for your platform.
(cd ninja && ./configure.py --bootstrap)
echo "${GITHUB_WORKSPACE}/ninja" >> $GITHUB_PATH
echo "::endgroup::"
fi
- name: Configure gclient
shell: bash
run: |
echo "::group::Configure gclient"
gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
echo "::endgroup::"
- name: Sync gclient
env:
GYP_DEFINES: "target_arch=${{ inputs.target_arch }} libpackager_type=${{ inputs.lib_type }}_library"
GYP_MSVS_VERSION: "2019"
GYP_MSVS_OVERRIDE_PATH: "C:/Program Files (x86)/Microsoft Visual Studio/2019/Enterprise"
shell: bash
run: |
echo "::group::Sync gclient"
BUILD_CONFIG="${{ inputs.build_type }}-${{ inputs.lib_type }}"
if [[ "$BUILD_CONFIG" == "Release-static" && "${{ runner.os }}" == "Linux" ]]; then
# For static release builds, set these two additional flags for fully static binaries.
export GYP_DEFINES="$GYP_DEFINES disable_fatal_linker_warnings=1 static_link_binaries=1"
fi
gclient sync
echo "::endgroup::"
- name: Build
shell: bash
run: |
echo "::group::Build"
ninja -C src/out/${{ inputs.build_type }}${{ inputs.build_type_suffix }}
echo "::endgroup::"
- name: Prepare artifacts (static release only)
shell: bash
run: |
BUILD_CONFIG="${{ inputs.build_type }}-${{ inputs.lib_type }}"
if [[ "$BUILD_CONFIG" != "Release-static" ]]; then
echo "Skipping artifacts for $BUILD_CONFIG."
exit 0
fi
if [[ "${{ runner.os }}" == "Linux" ]]; then
echo "::group::Check for static executables"
(
cd src/out/Release${{ inputs.build_type_suffix }}
# Prove that we built static executables on Linux. First, check that
# the executables exist, and fail if they do not. Then check "ldd",
# which will fail if the executable is not dynamically linked. If
# "ldd" succeeds, we fail the workflow. Finally, we call "true" so
# that the last executed statement will be a success, and the step
# won't be failed if we get that far.
ls packager mpd_generator >/dev/null || exit 1
ldd packager 2>&1 && exit 1
ldd mpd_generator 2>&1 && exit 1
true
)
echo "::endgroup::"
fi
echo "::group::Prepare artifacts folder"
mkdir artifacts
ARTIFACTS="$GITHUB_WORKSPACE/artifacts"
cd src/out/Release${{ inputs.build_type_suffix }}
echo "::endgroup::"
echo "::group::Strip executables"
strip packager${{ inputs.exe_ext }}
strip mpd_generator${{ inputs.exe_ext }}
echo "::endgroup::"
SUFFIX="-${{ inputs.os_name }}-${{ inputs.target_arch }}"
EXE_SUFFIX="$SUFFIX${{ inputs.exe_ext}}"
echo "::group::Copy packager"
cp packager${{ inputs.exe_ext }} $ARTIFACTS/packager$EXE_SUFFIX
echo "::endgroup::"
echo "::group::Copy mpd_generator"
cp mpd_generator${{ inputs.exe_ext }} $ARTIFACTS/mpd_generator$EXE_SUFFIX
echo "::endgroup::"
# The pssh-box bundle is OS and architecture independent. So only do
# it on this one OS and architecture, and give it a more generic
# filename.
if [[ '${{ inputs.os_name }}' == 'linux' && '${{ inputs.target_arch }}' == 'x64' ]]; then
echo "::group::Tar pssh-box"
tar -czf $ARTIFACTS/pssh-box.py.tar.gz pyproto pssh-box.py
echo "::endgroup::"
fi

@@ -1,36 +0,0 @@
name: Lint Shaka Packager
description: |
A reusable action to lint Shaka Packager source.
When checking out source, you must use 'fetch-depth: 2' in actions/checkout,
or else the linter won't have another revision to compare to.
runs:
using: composite
steps:
- name: Lint
shell: bash
run: |
cd src/
echo "::group::Installing git-clang-format"
wget https://raw.githubusercontent.com/llvm-mirror/clang/master/tools/clang-format/git-clang-format
sudo install -m 755 git-clang-format /usr/local/bin/git-clang-format
rm git-clang-format
echo "::endgroup::"
echo "::group::Installing pylint"
python3 -m pip install --upgrade pylint==2.8.3
echo "::endgroup::"
echo "::group::Check clang-format for C++ sources"
# NOTE: --binary forces use of global clang-format (which works) instead
# of depot_tools clang-format (which doesn't).
# NOTE: Must use base.sha instead of base.ref, since we don't have
# access to the branch name that base.ref would give us.
# NOTE: Must also use fetch-depth: 2 in actions/checkout to have access
# to the base ref for comparison.
packager/tools/git/check_formatting.py \
--binary /usr/bin/clang-format \
${{ github.event.pull_request.base.sha || 'HEAD^' }}
echo "::endgroup::"
echo "::group::Check pylint for Python sources"
packager/tools/git/check_pylint.py
echo "::endgroup::"

@@ -1,45 +0,0 @@
name: Test Shaka Packager
description: |
A reusable action to test Shaka Packager.
Should be run after building Shaka Packager.
inputs:
lib_type:
description: A library type, either "static" or "shared".
required: true
build_type:
description: A build type, either "Debug" or "Release".
required: true
build_type_suffix:
description: A suffix to append to the build type in the output path.
required: false
default: ""
exe_ext:
description: The extension on executable files.
required: false
default: ""
runs:
using: composite
steps:
- name: Test
shell: bash
run: |
echo "::group::Prepare test environment"
# NOTE: Some of these tests must be run from the "src" directory.
cd src/
OUTDIR=out/${{ inputs.build_type }}${{ inputs.build_type_suffix }}
if [[ '${{ runner.os }}' == 'macOS' ]]; then
export DYLD_FALLBACK_LIBRARY_PATH=$OUTDIR
fi
echo "::endgroup::"
for i in $OUTDIR/*test${{ inputs.exe_ext }}; do
echo "::group::Test $i"
"$i" || exit 1
echo "::endgroup::"
done
echo "::group::Test $OUTDIR/packager_test.py"
python3 $OUTDIR/packager_test.py \
-v --libpackager_type=${{ inputs.lib_type }}_library
echo "::endgroup::"

@@ -1,3 +1,17 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Docker Hub Release
# Runs when a new release is published on GitHub.
@@ -30,8 +44,8 @@ jobs:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ env.TARGET_REF }}
submodules: true
- name: Log in to Docker Hub
uses: docker/login-action@v1
@@ -43,5 +57,4 @@
uses: docker/build-push-action@v2
with:
push: true
context: src/
tags: ${{ secrets.DOCKERHUB_PACKAGE_NAME }}:latest,${{ secrets.DOCKERHUB_PACKAGE_NAME }}:${{ env.TARGET_REF }}

.github/workflows/docker-image.yaml (new file, 45 lines)

@@ -0,0 +1,45 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A workflow to build the official docker image.
name: Official Docker image
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
official_docker_image:
name: Build
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
ref: ${{ github.event.inputs.ref || github.ref }}
submodules: true
- name: Build
shell: bash
run: docker build .

.github/workflows/github-release.yaml (new file, 141 lines)

@@ -0,0 +1,141 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: GitHub Release
# Runs when a new tag is created that looks like a version number.
#
# 1. Creates a draft release on GitHub with the latest release notes
# 2. On all combinations of OS, build type, and library type:
# a. builds Packager
# b. builds the docs
# c. runs all tests
# d. attaches build artifacts to the release
# 3. Fully publishes the release on GitHub
#
# Publishing the release then triggers additional workflows for NPM, Docker
# Hub, and GitHub Pages.
#
# Can also be run manually for debugging purposes.
on:
push:
tags:
- "v*.*"
# For manual debugging:
workflow_dispatch:
inputs:
tag:
description: "An existing tag to release."
required: True
jobs:
# TODO(joeyparrish): Switch to release-please
setup:
name: Setup
runs-on: ubuntu-latest
outputs:
tag: ${{ steps.compute_tag.outputs.tag }}
steps:
- name: Compute tag
id: compute_tag
# We could be building from a workflow dispatch (manual run)
# or from a pushed tag. If triggered from a pushed tag, we would like
# to strip refs/tags/ off of the incoming ref and just use the tag
# name. Subsequent jobs can refer to the "tag" output of this job to
# determine the correct tag name in all cases.
run: |
# Strip refs/tags/ from the input to get the tag name, then store
# that in output.
echo "::set-output name=tag::${{ github.event.inputs.tag || github.ref }}" \
| sed -e 's@refs/tags/@@'
# TODO(joeyparrish): Switch to release-please
draft_release:
name: Create GitHub release
needs: setup
runs-on: ubuntu-latest
outputs:
release_id: ${{ steps.draft_release.outputs.id }}
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
ref: ${{ needs.setup.outputs.tag }}
- name: Check changelog version
# This check prevents releases without appropriate changelog updates.
run: |
VERSION=$(packager/tools/extract_from_changelog.py --version)
if [[ "$VERSION" != "${{ needs.setup.outputs.tag }}" ]]; then
echo ""
echo ""
echo "***** ***** *****"
echo ""
echo "Version mismatch!"
echo "Workflow is targetting ${{ needs.setup.outputs.tag }},"
echo "but CHANGELOG.md contains $VERSION!"
exit 1
fi
- name: Extract release notes
run: |
packager/tools/extract_from_changelog.py --release_notes \
| tee ../RELEASE_NOTES.md
- name: Draft release
id: draft_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ needs.setup.outputs.tag }}
release_name: ${{ needs.setup.outputs.tag }}
body_path: RELEASE_NOTES.md
draft: true
lint:
needs: setup
name: Lint
uses: ./.github/workflows/lint.yaml
with:
ref: ${{ needs.setup.outputs.tag }}
build_and_test:
needs: [setup, lint, draft_release]
name: Build and test
uses: ./.github/workflows/build.yaml
with:
ref: ${{ needs.setup.outputs.tag }}
test_supported_linux_distros:
# Doesn't really "need" it, but let's not waste time on a series of docker
# builds just to cancel it because of a linter error.
needs: lint
name: Test Linux distros
uses: ./.github/workflows/test-linux-distros.yaml
with:
ref: ${{ needs.setup.outputs.tag }}
# TODO(joeyparrish): Switch to release-please
publish_release:
name: Publish GitHub release
needs: [draft_release, build_and_test]
runs-on: ubuntu-latest
steps:
- name: Publish release
uses: eregon/publish-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
release_id: ${{ needs.draft_release.outputs.release_id }}

@@ -1,228 +0,0 @@
name: GitHub Release
# Runs when a new tag is created that looks like a version number.
#
# 1. Creates a draft release on GitHub with the latest release notes
# 2. On all combinations of OS, build type, and library type:
# a. builds Packager
# b. builds the docs
# c. runs all tests
# d. attaches build artifacts to the release
# 3. Fully publishes the release on GitHub
#
# Publishing the release then triggers additional workflows for NPM, Docker
# Hub, and GitHub Pages.
#
# Can also be run manually for debugging purposes.
on:
push:
tags:
- "v*.*"
# For manual debugging:
workflow_dispatch:
inputs:
tag:
description: "An existing tag to release."
required: True
jobs:
setup:
name: Setup
runs-on: ubuntu-latest
outputs:
tag: ${{ steps.compute_tag.outputs.tag }}
steps:
- name: Compute tag
id: compute_tag
# We could be building from a workflow dispatch (manual run)
# or from a pushed tag. If triggered from a pushed tag, we would like
# to strip refs/tags/ off of the incoming ref and just use the tag
# name. Subsequent jobs can refer to the "tag" output of this job to
# determine the correct tag name in all cases.
run: |
# Strip refs/tags/ from the input to get the tag name, then store
# that in output.
echo "::set-output name=tag::${{ github.event.inputs.tag || github.ref }}" \
| sed -e 's@refs/tags/@@'
draft_release:
name: Create GitHub release
needs: setup
runs-on: ubuntu-latest
outputs:
release_id: ${{ steps.draft_release.outputs.id }}
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ needs.setup.outputs.tag }}
- name: Check changelog version
# This check prevents releases without appropriate changelog updates.
run: |
cd src
VERSION=$(packager/tools/extract_from_changelog.py --version)
if [[ "$VERSION" != "${{ needs.setup.outputs.tag }}" ]]; then
echo ""
echo ""
echo "***** ***** *****"
echo ""
echo "Version mismatch!"
echo "Workflow is targetting ${{ needs.setup.outputs.tag }},"
echo "but CHANGELOG.md contains $VERSION!"
exit 1
fi
- name: Extract release notes
run: |
cd src
packager/tools/extract_from_changelog.py --release_notes \
| tee ../RELEASE_NOTES.md
- name: Draft release
id: draft_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ needs.setup.outputs.tag }}
release_name: ${{ needs.setup.outputs.tag }}
body_path: RELEASE_NOTES.md
draft: true
lint:
needs: setup
name: Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ needs.setup.outputs.tag }}
# This makes the merge base available for the C++ linter, so that it
# can tell which files have changed.
fetch-depth: 2
- name: Lint
uses: ./src/.github/workflows/custom-actions/lint-packager
build_and_test:
needs: [setup, lint, draft_release]
strategy:
matrix:
os: ["ubuntu-latest", "macos-latest", "windows-latest", "self-hosted-linux-arm64"]
build_type: ["Debug", "Release"]
lib_type: ["static", "shared"]
include:
- os: ubuntu-latest
os_name: linux
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: macos-latest
os_name: osx
target_arch: x64
exe_ext: ""
build_type_suffix: ""
- os: windows-latest
os_name: win
target_arch: x64
exe_ext: ".exe"
# 64-bit outputs on Windows go to a different folder name.
build_type_suffix: "_x64"
- os: self-hosted-linux-arm64
os_name: linux
target_arch: arm64
exe_ext: ""
build_type_suffix: ""
name: Build and test ${{ matrix.os_name }} ${{ matrix.target_arch }} ${{ matrix.build_type }} ${{ matrix.lib_type }}
runs-on: ${{ matrix.os }}
steps:
- name: Configure git to preserve line endings
# Otherwise, tests fail on Windows because "golden" test outputs will not
# have the correct line endings.
run: git config --global core.autocrlf false
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ needs.setup.outputs.tag }}
- name: Build docs (Linux only)
if: runner.os == 'Linux'
uses: ./src/.github/workflows/custom-actions/build-docs
- name: Build Packager
uses: ./src/.github/workflows/custom-actions/build-packager
with:
os_name: ${{ matrix.os_name }}
target_arch: ${{ matrix.target_arch }}
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
- name: Test Packager
uses: ./src/.github/workflows/custom-actions/test-packager
with:
lib_type: ${{ matrix.lib_type }}
build_type: ${{ matrix.build_type }}
build_type_suffix: ${{ matrix.build_type_suffix }}
exe_ext: ${{ matrix.exe_ext }}
- name: Attach artifacts to release
if: matrix.build_type == 'Release' && matrix.lib_type == 'static'
uses: dwenegar/upload-release-assets@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
release_id: ${{ needs.draft_release.outputs.release_id }}
assets_path: artifacts
test_supported_linux_distros:
# Doesn't really "need" it, but let's not waste time on a series of docker
# builds just to cancel it because of a linter error.
needs: lint
name: Test builds on all supported Linux distros (using docker)
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ github.event.inputs.ref || github.ref }}
- name: Install depot tools
shell: bash
run: |
git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
touch depot_tools/.disable_auto_update
echo "${GITHUB_WORKSPACE}/depot_tools" >> $GITHUB_PATH
- name: Setup gclient
shell: bash
run: |
gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
# NOTE: the docker tests will do gclient runhooks, so skip hooks here.
gclient sync --nohooks
- name: Test all distros
shell: bash
run: ./src/packager/testing/dockers/test_dockers.sh
publish_release:
name: Publish GitHub release
needs: [draft_release, build_and_test]
runs-on: ubuntu-latest
steps:
- name: Publish release
uses: eregon/publish-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
release_id: ${{ needs.draft_release.outputs.release_id }}

.github/workflows/lint.yaml (new file, 62 lines)

@@ -0,0 +1,62 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A workflow to lint Shaka Packager.
name: Lint
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
ref: ${{ inputs.ref }}
# We must use 'fetch-depth: 2', or else the linter won't have another
# revision to compare to.
fetch-depth: 2
- name: Lint
shell: bash
run: |
wget https://raw.githubusercontent.com/llvm-mirror/clang/master/tools/clang-format/git-clang-format
sudo install -m 755 git-clang-format /usr/local/bin/git-clang-format
rm git-clang-format
python3 -m pip install --upgrade pylint==2.8.3
# NOTE: Must use base.sha instead of base.ref, since we don't have
# access to the branch name that base.ref would give us.
# NOTE: Must also use fetch-depth: 2 in actions/checkout to have access
# to the base ref for comparison.
packager/tools/git/check_formatting.py \
--binary /usr/bin/clang-format \
${{ github.event.pull_request.base.sha || 'HEAD^' }}
packager/tools/git/check_pylint.py

@@ -30,7 +30,6 @@ jobs:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ env.TARGET_REF }}
- name: Setup NodeJS
@@ -40,7 +39,7 @@
- name: Set package name and version
run: |
cd src/npm
cd npm
sed package.json -i \
-e 's/"name": ""/"name": "${{ secrets.NPM_PACKAGE_NAME }}"/' \
-e 's/"version": ""/"version": "${{ env.TARGET_REF }}"/'
@@ -49,6 +48,6 @@
uses: JS-DevTools/npm-publish@v1
with:
token: ${{ secrets.NPM_CI_TOKEN }}
package: src/npm/package.json
package: npm/package.json
check-version: false
access: public

.github/workflows/pr.yaml (new file, 67 lines)

@@ -0,0 +1,67 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Build and Test PR
# Builds and tests on all combinations of OS, build type, and library type.
# Also builds the docs.
#
# Runs when a pull request is opened or updated.
#
# Can also be run manually for debugging purposes.
on:
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
inputs:
ref:
description: "The ref to build and test."
required: False
jobs:
lint:
name: Lint
uses: ./.github/workflows/lint.yaml
with:
ref: ${{ github.event.inputs.ref || github.ref }}
build_and_test:
needs: lint
name: Build and test
uses: ./.github/workflows/build.yaml
with:
ref: ${{ github.event.inputs.ref || github.ref }}
build_docs:
needs: lint
name: Build docs
uses: ./.github/workflows/build-docs.yaml
with:
ref: ${{ github.event.inputs.ref || github.ref }}
official_docker_image:
needs: lint
name: Official Docker image
uses: ./.github/workflows/docker-image.yaml
with:
ref: ${{ github.event.inputs.ref || github.ref }}
test_supported_linux_distros:
# Doesn't really "need" it, but let's not waste time on a series of docker
# builds just to cancel it because of a linter error.
needs: lint
name: Test Linux distros
uses: ./.github/workflows/test-linux-distros.yaml
with:
ref: ${{ github.event.inputs.ref || github.ref }}

@@ -0,0 +1,80 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# A workflow to test building in various Linux distros.
name: Test Linux Distros
# Runs when called from another workflow.
on:
workflow_call:
inputs:
ref:
required: true
type: string
# By default, run all commands in a bash shell. On Windows, the default would
# otherwise be powershell.
defaults:
run:
shell: bash
jobs:
# Configure the build matrix based on files in the repo.
matrix_config:
name: Matrix config
runs-on: ubuntu-latest
outputs:
MATRIX: ${{ steps.configure.outputs.MATRIX }}
steps:
- uses: actions/checkout@v2
with:
ref: ${{ inputs.ref }}
- name: Configure Build Matrix
id: configure
shell: node {0}
run: |
const fs = require('fs');
const files = fs.readdirSync('packager/testing/dockers/');
const matrix = files.map((file) => {
return { os_name: file.replace('_Dockerfile', '') };
});
// Output a JSON object consumed by the build matrix below.
console.log(`::set-output name=MATRIX::${ JSON.stringify(matrix) }`);
// Log the outputs, for the sake of debugging this script.
console.log({matrix});
# Build each dockerfile in parallel in a different CI job.
build:
needs: matrix_config
strategy:
# Let other matrix entries complete, so we have all results on failure
# instead of just the first failure.
fail-fast: false
matrix:
include: ${{ fromJSON(needs.matrix_config.outputs.MATRIX) }}
name: ${{ matrix.os_name }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
ref: ${{ inputs.ref }}
submodules: true
- name: Build in Docker
run: ./packager/testing/test_dockers.sh "${{ matrix.os_name }}"

@@ -31,7 +31,6 @@ jobs:
- name: Checkout code
uses: actions/checkout@v2
with:
path: src
ref: ${{ steps.ref.outputs.ref }}
- name: Set up Python
@@ -40,7 +39,7 @@
python-version: 3.8
- name: Build docs
uses: ./src/.github/workflows/custom-actions/build-docs
uses: ./.github/workflows/custom-actions/build-docs
- name: Deploy to gh-pages branch
uses: peaceiris/actions-gh-pages@v3

.gitignore (20 lines changed)

@@ -5,33 +5,17 @@
*/.vs/*
*~
.DS_store
.cache
.cproject
.project
.pydevproject
.idea
.repo
.settings
/out*
/packager/base/
/packager/build/
/packager/buildtools/third_party/libc++/trunk/
/packager/buildtools/third_party/libc++abi/trunk/
build/
/packager/docs/
/packager/testing/gmock/
/packager/testing/gtest/
/packager/third_party/binutils/
/packager/third_party/boringssl/src/
/packager/third_party/curl/source/
/packager/third_party/gflags/src/
/packager/third_party/gold/
/packager/third_party/icu/
/packager/third_party/libpng/src/
/packager/third_party/libwebm/src/
/packager/third_party/llvm-build/
/packager/third_party/modp_b64/
/packager/third_party/tcmalloc/
/packager/third_party/yasm/source/patched-yasm/
/packager/third_party/zlib/
/packager/tools/clang/
/packager/tools/gyp/
/packager/tools/valgrind/

.gitmodules (new file, 18 lines)

@@ -0,0 +1,18 @@
[submodule "packager/testing/googletest"]
path = packager/third_party/googletest/source
url = https://github.com/google/googletest
[submodule "packager/third_party/abseil-cpp"]
path = packager/third_party/abseil-cpp/source
url = https://github.com/abseil/abseil-cpp
[submodule "packager/third_party/curl"]
path = packager/third_party/curl/source
url = https://github.com/curl/curl
[submodule "packager/third_party/glog"]
path = packager/third_party/glog/source
url = https://github.com/google/glog
[submodule "packager/third_party/json"]
path = packager/third_party/json
url = https://github.com/nlohmann/json
[submodule "packager/third_party/mbedtls"]
path = packager/third_party/mbedtls/source
url = https://github.com/Mbed-TLS/mbedtls
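
Since the dependencies above are now submodules, a fresh checkout must
initialize them before building.  Standard git commands for this (not
specific to this project):

  git clone --recurse-submodules https://github.com/shaka-project/shaka-packager.git
  # or, in an existing checkout:
  git submodule update --init --recursive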

CMakeLists.txt (new file, 45 lines)

@@ -0,0 +1,45 @@
# Copyright 2022 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# Root-level CMake build file.
# Project name. May not contain spaces. Versioning is managed elsewhere.
cmake_policy(SET CMP0048 NEW)
project(shaka-packager VERSION "")
# Minimum CMake version.
# We could require as low as 3.10, but glog requires 3.16.
cmake_minimum_required(VERSION 3.16)
# The only build option for Shaka Packager is whether to build a shared
# libpackager library. By default, don't.
option(LIBPACKAGER_SHARED "Build libpackager as a shared library" OFF)
# No in-source builds allowed.
set(CMAKE_DISABLE_SOURCE_CHANGES ON)
set(CMAKE_DISABLE_IN_SOURCE_BUILD ON)
# Minimum C++ version.
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
# Minimum GCC version, if using GCC.
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
# Require at least GCC 9. Before GCC 9, C++17 filesystem libs don't link.
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9)
message(FATAL_ERROR "GCC version must be at least 9! (Found ${CMAKE_CXX_COMPILER_VERSION})")
endif()
endif()
# Global include paths.
# Project root, to reference packager/foo/bar/...
include_directories(.)
# Enable CMake's test infrastructure.
enable_testing()
# Subdirectories with their own CMakeLists.txt
add_subdirectory(packager)
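
The LIBPACKAGER_SHARED option defined above is toggled at configure
time, as the new build.yaml workflow does; for example:

  cmake -S . -B build -DLIBPACKAGER_SHARED=ON
  cmake --build build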

DEPS (deleted, 92 lines)

@@ -1,92 +0,0 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
#
# Packager dependencies.
vars = {
"chromium_git": "https://chromium.googlesource.com",
"github": "https://github.com",
}
deps = {
"src/packager/base":
Var("chromium_git") + "/chromium/src/base@a34eabec0d807cf03dc8cfc1a6240156ac2bbd01", #409071
"src/packager/build":
Var("chromium_git") + "/chromium/src/build@f0243d787961584ac95a86e7dae897b9b60ea674", #409966
"src/packager/testing/gmock":
Var("chromium_git") + "/external/googlemock@0421b6f358139f02e102c9c332ce19a33faf75be", #566
"src/packager/testing/gtest":
Var("chromium_git") + "/external/github.com/google/googletest@6f8a66431cb592dad629028a50b3dd418a408c87",
# Make sure the version matches the one in
# src/packager/third_party/boringssl, which contains perl generated files.
"src/packager/third_party/boringssl/src":
Var("github") + "/google/boringssl@76918d016414bf1d71a86d28239566fbcf8aacf0",
"src/packager/third_party/curl/source":
Var("github") + "/curl/curl@62c07b5743490ce373910f469abc8cdc759bec2b", #7.57.0
"src/packager/third_party/gflags/src":
Var("chromium_git") + "/external/github.com/gflags/gflags@03bebcb065c83beff83d50ae025a55a4bf94dfca",
# Required by libxml.
"src/packager/third_party/icu":
Var("chromium_git") + "/chromium/deps/icu@ef5c735307d0f86c7622f69620994c9468beba99",
"src/packager/third_party/libpng/src":
Var("github") + "/glennrp/libpng@a40189cf881e9f0db80511c382292a5604c3c3d1",
"src/packager/third_party/libwebm/src":
Var("chromium_git") + "/webm/libwebm@d6af52a1e688fade2e2d22b6d9b0c82f10d38e0b",
"src/packager/third_party/modp_b64":
Var("chromium_git") + "/chromium/src/third_party/modp_b64@aae60754fa997799e8037f5e8ca1f56d58df763d", #405651
"src/packager/third_party/tcmalloc/chromium":
Var("chromium_git") + "/chromium/src/third_party/tcmalloc/chromium@58a93bea442dbdcb921e9f63e9d8b0009eea8fdb", #374449
"src/packager/third_party/zlib":
Var("chromium_git") + "/chromium/src/third_party/zlib@830b5c25b5fbe37e032ea09dd011d57042dd94df", #408157
"src/packager/tools/gyp":
Var("chromium_git") + "/external/gyp@caa60026e223fc501e8b337fd5086ece4028b1c6",
}
deps_os = {
"win": {
# Required by boringssl.
"src/packager/third_party/yasm/source/patched-yasm":
Var("chromium_git") + "/chromium/deps/yasm/patched-yasm.git@7da28c6c7c6a1387217352ce02b31754deb54d2a",
},
}
hooks = [
{
# When using CC=clang CXX=clang++, there is a binutils version check that
# does not work correctly in common.gypi. Since we are stuck with a very
# old version of chromium/src/build, there is nothing to do but patch it to
# remove the check. Thankfully, this version number does not control
# anything critical in the build settings as far as we can tell.
'name': 'patch-binutils-version-check',
'pattern': '.',
'action': ['sed', '-e', 's/<!pymod_do_main(compiler_version target assembler)/0/', '-i.bk', 'src/packager/build/common.gypi'],
},
{
# A change to a .gyp, .gypi, or to GYP itself should run the generator.
"pattern": ".",
"action": ["python", "src/gyp_packager.py", "--depth=src/packager"],
},
{
# Update LASTCHANGE.
'name': 'lastchange',
'pattern': '.',
'action': ['python', 'src/packager/build/util/lastchange.py',
'-o', 'src/packager/build/util/LASTCHANGE'],
},
]

@@ -1,46 +1,30 @@
FROM alpine:3.11 as builder
FROM alpine:3.12 as builder
# Install utilities, libraries, and dev tools.
RUN apk add --no-cache \
bash curl \
bsd-compat-headers c-ares-dev linux-headers \
build-base git ninja python2 python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
WORKDIR /
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH $PATH:/depot_tools
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
# Alpine uses musl which does not have mallinfo defined in malloc.h. Define the
# structure to workaround a Chromium base bug.
RUN sed -i \
'/malloc_usable_size/a \\nstruct mallinfo {\n int arena;\n int hblkhd;\n int uordblks;\n};' \
/usr/include/malloc.h
ENV GYP_DEFINES='musl=1'
build-base cmake git python3
# Build shaka-packager from the current directory, rather than what has been
# merged.
WORKDIR shaka_packager
RUN gclient config https://github.com/shaka-project/shaka-packager.git --name=src --unmanaged
COPY . src
RUN gclient sync --force
RUN ninja -C src/out/Release
WORKDIR shaka-packager
COPY . /shaka-packager/
RUN mkdir build
RUN cmake -S . -B build
RUN make -C build
# Copy only result binaries to our final image.
FROM alpine:3.11
RUN apk add --no-cache libstdc++ python
COPY --from=builder /shaka_packager/src/out/Release/packager \
/shaka_packager/src/out/Release/mpd_generator \
/shaka_packager/src/out/Release/pssh-box.py \
/usr/bin/
FROM alpine:3.12
RUN apk add --no-cache libstdc++ python3
# TODO(joeyparrish): Copy binaries when build system is complete
#COPY --from=builder /shaka-packager/build/packager \
# /shaka-packager/build/mpd_generator \
# /shaka-packager/build/pssh-box.py \
# /usr/bin/
# Copy pyproto directory, which is needed by pssh-box.py script. This line
# cannot be combined with the line above as Docker's copy command skips the
# directory itself. See https://github.com/moby/moby/issues/15858 for details.
COPY --from=builder /shaka_packager/src/out/Release/pyproto /usr/bin/pyproto
# TODO(joeyparrish): Copy binaries when build system is complete
#COPY --from=builder /shaka-packager/build/pyproto /usr/bin/pyproto

View File

@ -58,7 +58,7 @@ PROJECT_LOGO =
# entered, it will be relative to the location where doxygen was started. If
# left blank the current directory will be used.
OUTPUT_DIRECTORY = out/doxygen
OUTPUT_DIRECTORY = build/doxygen
# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 4096 sub-
# directories (in 2 levels) under the output directory of each output format and

View File

@ -6,7 +6,7 @@ SPHINXOPTS =
SPHINXBUILD = python3 -msphinx
SPHINXPROJ = ShakaPackager
SOURCEDIR = source
BUILDDIR = ../out/sphinx
BUILDDIR = ../build/sphinx
# Put it first so that "make" without argument is like "make help".
help:

View File

@ -1,124 +0,0 @@
#!/usr/bin/python3
#
# Copyright 2014 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
"""This script wraps gyp and sets up build environments.
Build instructions:
1. Setup gyp: ./gyp_packager.py or use gclient runhooks
Ninja is the default build system. User can also change to make by
overriding GYP_GENERATORS to make, i.e.
"GYP_GENERATORS='make' gclient runhooks".
2. The first step generates the make files but does not start the
build process. Ninja is the default build system. Refer to Ninja
manual on how to do the build.
Common syntaxes: ninja -C out/{Debug/Release} [Module]
Module is optional. If not specified, build everything.
Step 1 is only required if there is any gyp file change. Otherwise, you
may just run ninja.
"""
import os
import sys
checkout_dir = os.path.dirname(os.path.realpath(__file__))
src_dir = os.path.join(checkout_dir, 'packager')
# Workaround the dynamic path.
# pylint: disable=wrong-import-position
sys.path.insert(0, os.path.join(src_dir, 'build'))
import gyp_helper
sys.path.insert(0, os.path.join(src_dir, 'tools', 'gyp', 'pylib'))
import gyp
if __name__ == '__main__':
args = sys.argv[1:]
# Allow src/.../chromium.gyp_env to define GYP variables.
gyp_helper.apply_chromium_gyp_env()
# If we didn't get a gyp file, then fall back to assuming 'packager.gyp' from
# the same directory as the script.
if not any(arg.endswith('.gyp') for arg in args):
args.append(os.path.join(src_dir, 'packager.gyp'))
# Always include Chromium's common.gypi and our common.gypi.
args.extend([
'-I' + os.path.join(src_dir, 'build', 'common.gypi'),
'-I' + os.path.join(src_dir, 'common.gypi')
])
# Set these default GYP_DEFINES if user does not set the value explicitly.
_DEFAULT_DEFINES = {'test_isolation_mode': 'noop',
'use_custom_libcxx': 0,
'use_glib': 0,
'use_openssl': 1,
'use_sysroot': 0,
'use_x11': 0,
'linux_use_bundled_binutils': 0,
'linux_use_bundled_gold': 0,
'linux_use_gold_flags': 0,
'clang': 0,
'host_clang': 0,
'clang_xcode': 1,
'use_allocator': 'none',
'mac_deployment_target': '10.10',
'use_experimental_allocator_shim': 0,
'clang_use_chrome_plugins': 0}
gyp_defines_str = os.environ.get('GYP_DEFINES', '')
user_gyp_defines_map = {}
for term in gyp_defines_str.split(' '):
if term:
key, value = term.strip().split('=')
user_gyp_defines_map[key] = value
for key, value in _DEFAULT_DEFINES.items():
if key not in user_gyp_defines_map:
gyp_defines_str += ' {0}={1}'.format(key, value)
os.environ['GYP_DEFINES'] = gyp_defines_str.strip()
# Default to ninja, but only if no generator has explicitly been set.
if 'GYP_GENERATORS' not in os.environ:
os.environ['GYP_GENERATORS'] = 'ninja'
# By default, don't download our own toolchain for Windows.
if 'DEPOT_TOOLS_WIN_TOOLCHAIN' not in os.environ:
os.environ['DEPOT_TOOLS_WIN_TOOLCHAIN'] = '0'
# There shouldn't be a circular dependency relationship between .gyp files,
# but in Chromium's .gyp files, on non-Mac platforms, circular relationships
# currently exist. The check for circular dependencies is currently
# bypassed on other platforms, but is left enabled on the Mac, where a
# violation of the rule causes Xcode to misbehave badly.
if 'xcode' not in os.environ['GYP_GENERATORS']:
args.append('--no-circular-check')
# TODO(kqyang): Find a better way to handle the depth. This workaround works
# only if this script is executed in 'src' directory.
if not any('--depth' in arg for arg in args):
args.append('--depth=packager')
if 'output_dir=' not in os.environ.get('GYP_GENERATOR_FLAGS', ''):
output_dir = os.path.join(checkout_dir, 'out')
gyp_generator_flags = 'output_dir="' + output_dir + '"'
if os.environ.get('GYP_GENERATOR_FLAGS'):
os.environ['GYP_GENERATOR_FLAGS'] += ' ' + gyp_generator_flags
else:
os.environ['GYP_GENERATOR_FLAGS'] = gyp_generator_flags
print('Updating projects from gyp files...')
sys.stdout.flush()
# Off we go...
sys.exit(gyp.main(args))

packager/CMakeLists.txt Normal file
View File

@ -0,0 +1,47 @@
# Copyright 2022 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
# Packager CMake build file.
# Build static libs by default, or shared if LIBPACKAGER_SHARED is defined.
if(LIBPACKAGER_SHARED)
add_definitions(-DSHARED_LIBRARY_BUILD)
endif()
# Global C++ flags.
if(MSVC)
# Warning level 4 and all warnings as errors.
add_compile_options(/W4 /WX)
# Silence a warning from an absl header about alignment in boolean flags.
add_compile_options(/wd4324)
# Silence a warning about STL types in exported classes.
add_compile_options(/wd4251)
# Packager's macro for Windows-specific code.
add_definitions(-DOS_WIN)
# Suppress Microsoft's min() and max() macros, which will conflict with
# things like std::numeric_limits::max() and std::min().
add_definitions(-DNOMINMAX)
# Define this so that we can use fopen() without warnings.
add_definitions(-D_CRT_SECURE_NO_WARNINGS)
# Don't automatically include winsock.h in windows.h. This is needed for us
# to use winsock2.h, which contains definitions that conflict with the
# ancient winsock 1.1 interface in winsock.h.
add_definitions(-DWIN32_LEAN_AND_MEAN)
else()
# Lots of warnings and all warnings as errors.
# Note that we can't use -Wpedantic due to absl's int128 headers.
add_compile_options(-Wall -Wextra -Werror)
endif()
# Subdirectories with their own CMakeLists.txt, all of whose targets are built.
add_subdirectory(file)
add_subdirectory(kv_pairs)
add_subdirectory(status)
add_subdirectory(third_party)
add_subdirectory(tools)
add_subdirectory(version)
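The NOMINMAX comment above is worth a concrete illustration. This tiny sketch (not part of the tree) shows the kind of expression that breaks when windows.h defines min()/max() as macros; with NOMINMAX defined, it compiles as written.

#include <cstdint>
#include <limits>

// Illustration only: with windows.h included and NOMINMAX not defined,
// max() becomes a macro and the numeric_limits call below fails to parse.
int64_t SaturatingAdd(int64_t a, int64_t b) {
  const int64_t kMax = std::numeric_limits<int64_t>::max();
  return (b > 0 && a > kMax - b) ? kMax : a + b;
}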

View File

@ -1,131 +0,0 @@
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import("//build/config/sanitizers/sanitizers.gni")
import("//build/toolchain/toolchain.gni")
# Used by libc++ and libc++abi.
config("config") {
defines = [ "LIBCXX_BUILDING_LIBCXXABI" ]
cflags = [
"-fPIC",
"-fstrict-aliasing",
"-pthread",
]
cflags_cc = [
"-nostdinc++",
"-isystem" + rebase_path("trunk/include", root_build_dir),
"-isystem" + rebase_path("../libc++abi/trunk/include", root_build_dir),
"-std=c++11",
]
}
shared_library("libc++") {
sources = [
"trunk/src/algorithm.cpp",
"trunk/src/any.cpp",
"trunk/src/bind.cpp",
"trunk/src/chrono.cpp",
"trunk/src/condition_variable.cpp",
"trunk/src/debug.cpp",
"trunk/src/exception.cpp",
"trunk/src/future.cpp",
"trunk/src/hash.cpp",
"trunk/src/ios.cpp",
"trunk/src/iostream.cpp",
"trunk/src/locale.cpp",
"trunk/src/memory.cpp",
"trunk/src/mutex.cpp",
"trunk/src/new.cpp",
"trunk/src/optional.cpp",
"trunk/src/random.cpp",
"trunk/src/regex.cpp",
"trunk/src/shared_mutex.cpp",
"trunk/src/stdexcept.cpp",
"trunk/src/string.cpp",
"trunk/src/strstream.cpp",
"trunk/src/system_error.cpp",
"trunk/src/thread.cpp",
"trunk/src/typeinfo.cpp",
"trunk/src/utility.cpp",
"trunk/src/valarray.cpp",
]
configs -= [
"//build/config/compiler:chromium_code",
"//build/config/compiler:no_rtti",
"//build/config/gcc:no_exceptions",
"//build/config/gcc:symbol_visibility_hidden",
]
configs += [
":config",
"//build/config/compiler:no_chromium_code",
"//build/config/compiler:rtti",
"//build/config/gcc:symbol_visibility_default",
"//build/config/sanitizers:sanitizer_options_link_helper",
]
ldflags = [ "-nodefaultlibs" ]
# TODO(GYP): Remove "-pthread" from ldflags.
# -nodefaultlibs turns -pthread into a no-op, causing an unused argument
# warning. Explicitly link with -lpthread instead.
libs = [
"m",
]
if (!is_mac) {
libs += [
"c",
"gcc_s",
"pthread",
"rt",
]
}
# libc++abi is linked statically into libc++.so. This allows us to get both
# libc++ and libc++abi by passing '-stdlib=libc++'. If libc++abi was a
# separate DSO, we'd have to link against it explicitly.
deps = [
"//buildtools/third_party/libc++abi",
]
if (is_mac && using_sanitizer) {
# -nodefaultlibs on mac doesn't add any kind of runtime libraries.
# These runtimes have to be added manually.
lib_dirs = [ "//third_party/llvm-build/Release+Asserts/lib/clang/$clang_version/lib/darwin" ]
if (is_asan) {
libs += [ "clang_rt.asan_osx_dynamic" ]
}
}
}
group("libcxx_proxy") {
deps = [
":libc++",
]
public_configs = [ ":link_helper" ]
}
# This config is only used by binaries and shared library targets.
# //build/config/sanitizers:default_sanitizer_flags sets the include paths for
# everything else.
config("link_helper") {
ldflags = [
"-stdlib=libc++",
]
if (!is_mac) {
ldflags += [
# Normally the generator takes care of RPATH. Our case is special because
# the generator is unaware of the libc++.so dependency. Note that setting
# RPATH here is a potential security issue. See the following for another
# example of this issue: https://code.google.com/p/gyp/issues/detail?id=315
"-Wl,-rpath,\$ORIGIN/",
]
}
lib_dirs = [ root_build_dir ]
}

View File

@ -1,13 +0,0 @@
Name: libcxx
Short Name: libc++
URL: http://libcxx.llvm.org/
Version: 1.0
License: MIT, University of Illinois/NCSA Open Source License
License File: trunk/LICENSE.TXT
Security Critical: no
Description:
libc++ for Chromium.
This is intended for instrumented builds, not for release.
There was no security review for this library.

View File

@ -1,126 +0,0 @@
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
{
'variables': {
'libcxx_root': '../../packager/buildtools/third_party/libc++',
},
'targets': [
{
'target_name': 'libcxx_proxy',
'type': 'none',
'toolsets': ['host', 'target'],
'dependencies=': [
'libc++',
],
# Do not add dependency on libc++.so to dependents of this target. We
# don't want to pass libc++.so on the command line to the linker, as that
# would cause it to be linked into C executables which don't need it.
# Instead, we supply -stdlib=libc++ and let the clang driver decide.
'dependencies_traverse': 0,
'variables': {
# Don't add this target to the dependencies of targets with type=none.
'link_dependency': 1,
},
'all_dependent_settings': {
'target_conditions': [
['_type!="none"', {
'cflags_cc': [
'-nostdinc++',
'-isystem<(libcxx_root)/trunk/include',
'-isystem<(libcxx_root)/../libc++abi/trunk/include',
],
'ldflags': [
'-stdlib=libc++',
# Normally the generator takes care of RPATH. Our case is special
# because the generator is unaware of the libc++.so dependency.
# Note that setting RPATH here is a potential security issue. See:
# https://code.google.com/p/gyp/issues/detail?id=315
'-Wl,-rpath,\$$ORIGIN/lib/',
],
'library_dirs': [
'<(PRODUCT_DIR)/lib/',
],
}],
],
},
},
{
'target_name': 'libc++',
'type': 'shared_library',
'toolsets': ['host', 'target'],
'dependencies=': [
# libc++abi is linked statically into libc++.so. This allows us to get
# both libc++ and libc++abi by passing '-stdlib=libc++'. If libc++abi
# was a separate DSO, we'd have to link against it explicitly.
'../libc++abi/libc++abi.gyp:libc++abi',
],
'sources': [
'trunk/src/algorithm.cpp',
'trunk/src/any.cpp',
'trunk/src/bind.cpp',
'trunk/src/chrono.cpp',
'trunk/src/condition_variable.cpp',
'trunk/src/debug.cpp',
'trunk/src/exception.cpp',
'trunk/src/future.cpp',
'trunk/src/hash.cpp',
'trunk/src/ios.cpp',
'trunk/src/iostream.cpp',
'trunk/src/locale.cpp',
'trunk/src/memory.cpp',
'trunk/src/mutex.cpp',
'trunk/src/new.cpp',
'trunk/src/optional.cpp',
'trunk/src/random.cpp',
'trunk/src/regex.cpp',
'trunk/src/shared_mutex.cpp',
'trunk/src/stdexcept.cpp',
'trunk/src/string.cpp',
'trunk/src/strstream.cpp',
'trunk/src/system_error.cpp',
'trunk/src/thread.cpp',
'trunk/src/typeinfo.cpp',
'trunk/src/utility.cpp',
'trunk/src/valarray.cpp',
],
'cflags': [
'-fPIC',
'-fstrict-aliasing',
'-pthread',
],
'cflags_cc': [
'-nostdinc++',
'-isystem<(libcxx_root)/trunk/include',
'-isystem<(libcxx_root)/../libc++abi/trunk/include',
'-std=c++11',
],
'cflags_cc!': [
'-fno-exceptions',
'-fno-rtti',
],
'cflags!': [
'-fvisibility=hidden',
],
'defines': [
'LIBCXX_BUILDING_LIBCXXABI',
],
'ldflags': [
'-nodefaultlibs',
],
'ldflags!': [
# -nodefaultlibs turns -pthread into a no-op, causing an unused argument
# warning. Explicitly link with -lpthread instead.
'-pthread',
],
'libraries': [
'-lc',
'-lgcc_s',
'-lm',
'-lpthread',
'-lrt',
],
},
]
}

View File

@ -1,47 +0,0 @@
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
config("libc++abi_warnings") {
if (is_clang) {
# http://llvm.org/PR25978
cflags = [ "-Wno-unused-function" ]
}
}
static_library("libc++abi") {
sources = [
"trunk/src/abort_message.cpp",
"trunk/src/cxa_aux_runtime.cpp",
"trunk/src/cxa_default_handlers.cpp",
"trunk/src/cxa_demangle.cpp",
"trunk/src/cxa_exception.cpp",
"trunk/src/cxa_exception_storage.cpp",
"trunk/src/cxa_guard.cpp",
"trunk/src/cxa_handlers.cpp",
"trunk/src/cxa_new_delete.cpp",
"trunk/src/cxa_personality.cpp",
"trunk/src/cxa_thread_atexit.cpp",
"trunk/src/cxa_unexpected.cpp",
"trunk/src/cxa_vector.cpp",
"trunk/src/cxa_virtual.cpp",
"trunk/src/exception.cpp",
"trunk/src/private_typeinfo.cpp",
"trunk/src/stdexcept.cpp",
"trunk/src/typeinfo.cpp",
]
configs -= [
"//build/config/compiler:chromium_code",
"//build/config/compiler:no_rtti",
"//build/config/gcc:no_exceptions",
"//build/config/gcc:symbol_visibility_hidden",
]
configs += [
"//build/config/compiler:no_chromium_code",
"//build/config/compiler:rtti",
"//build/config/gcc:symbol_visibility_default",
"//buildtools/third_party/libc++:config",
# Must be after no_chromium_code.
":libc++abi_warnings",
]
}

View File

@ -1,13 +0,0 @@
Name: libcxxabi
Short Name: libc++abi
URL: http://libcxxabi.llvm.org/
Version: 1.0
License: MIT, University of Illinois/NCSA Open Source License
License File: trunk/LICENSE.TXT
Security Critical: no
Description:
libc++abi for Chromium.
This is intended for instrumented builds, not for release.
There was no security review for this library.

View File

@ -1,58 +0,0 @@
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
{
'targets': [
{
'target_name': 'libc++abi',
'type': 'static_library',
'toolsets': ['host', 'target'],
'dependencies=': [],
'sources': [
'trunk/src/abort_message.cpp',
'trunk/src/cxa_aux_runtime.cpp',
'trunk/src/cxa_default_handlers.cpp',
'trunk/src/cxa_demangle.cpp',
'trunk/src/cxa_exception.cpp',
'trunk/src/cxa_exception_storage.cpp',
'trunk/src/cxa_guard.cpp',
'trunk/src/cxa_handlers.cpp',
'trunk/src/cxa_new_delete.cpp',
'trunk/src/cxa_personality.cpp',
'trunk/src/cxa_thread_atexit.cpp',
'trunk/src/cxa_unexpected.cpp',
'trunk/src/cxa_vector.cpp',
'trunk/src/cxa_virtual.cpp',
'trunk/src/exception.cpp',
'trunk/src/private_typeinfo.cpp',
'trunk/src/stdexcept.cpp',
'trunk/src/typeinfo.cpp',
],
'include_dirs': [
'trunk/include',
'../libc++/trunk/include'
],
'variables': {
'clang_warning_flags': [
# http://llvm.org/PR25978
'-Wno-unused-function',
],
},
'cflags': [
'-fPIC',
'-fstrict-aliasing',
'-nostdinc++',
'-pthread',
'-std=c++11',
],
'cflags_cc!': [
'-fno-exceptions',
'-fno-rtti',
],
'cflags!': [
'-fvisibility=hidden',
],
},
]
}

packager/common.h Normal file
View File

@ -0,0 +1,39 @@
// Copyright 2022 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_COMMON_H_
#define PACKAGER_COMMON_H_
#include <type_traits>
#include "absl/base/macros.h"
namespace shaka {
/// A macro to disable copying and assignment. Usage:
/// class Foo {
/// private:
/// DISALLOW_COPY_AND_ASSIGN(Foo);
/// }
#define DISALLOW_COPY_AND_ASSIGN(TypeName) \
TypeName(const TypeName&) = delete; \
void operator=(const TypeName&) = delete;
/// ABSL_ARRAYSIZE works just like the arraysize macro we used to use from
/// Chromium. To ease porting, define arraysize() as ABSL_ARRAYSIZE().
#define arraysize(a) ABSL_ARRAYSIZE(a)
/// A macro to declare that you intentionally did not use a parameter. Useful
/// when implementing abstract interfaces.
#define UNUSED(x) (void)(x)
/// A macro to declare that you intentionally did not implement a method.
/// You can use the insertion operator to add specific logs to this.
#define NOTIMPLEMENTED() LOG(ERROR) << "NOTIMPLEMENTED: "
} // namespace shaka
#endif // PACKAGER_COMMON_H_
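A quick usage sketch of these helpers (illustrative only; the Decoder class is hypothetical and not part of this diff):

#include <cstdint>

#include "glog/logging.h"
#include "packager/common.h"

namespace shaka {

class Decoder {
 public:
  Decoder() = default;

  // UNUSED() silences unused-parameter warnings in stub implementations,
  // which matters now that all warnings are treated as errors.
  bool Seek(uint64_t position) {
    UNUSED(position);
    return false;
  }

  // NOTIMPLEMENTED() logs an error; extra context can be streamed in.
  void Reset() { NOTIMPLEMENTED() << "Reset() is a stub in this sketch"; }

 private:
  DISALLOW_COPY_AND_ASSIGN(Decoder);
};

}  // namespace shaka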

View File

@ -0,0 +1,46 @@
# Copyright 2022 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
add_library(file STATIC
callback_file.cc
file.cc
file_util.cc
http_file.cc
io_cache.cc
local_file.cc
memory_file.cc
thread_pool.cc
threaded_io_file.cc
udp_file.cc
udp_options.cc)
target_link_libraries(file
absl::base
absl::flags
absl::str_format
absl::strings
absl::synchronization
absl::time
libcurl
glog
kv_pairs
status
version)
add_executable(file_unittest
callback_file_unittest.cc
file_unittest.cc
file_util_unittest.cc
http_file_unittest.cc
io_cache_unittest.cc
memory_file_unittest.cc
udp_options_unittest.cc)
target_link_libraries(file_unittest
file
gmock
gtest
gtest_main
nlohmann_json)
add_test(NAME file_unittest COMMAND file_unittest)

View File

@ -6,7 +6,8 @@
#include "packager/file/callback_file.h"
#include "packager/base/logging.h"
#include "glog/logging.h"
#include "packager/common.h"
namespace shaka {
@ -47,11 +48,13 @@ bool CallbackFile::Flush() {
}
bool CallbackFile::Seek(uint64_t position) {
UNUSED(position);
VLOG(1) << "CallbackFile does not support Seek().";
return false;
}
bool CallbackFile::Tell(uint64_t* position) {
UNUSED(position);
VLOG(1) << "CallbackFile does not support Tell().";
return false;
}

View File

@ -94,8 +94,11 @@ TEST(CallbackFileTest, ReadFailed) {
File::MakeCallbackFileName(callback_params, kBufferLabel);
EXPECT_CALL(mock_read_func, Call(StrEq(kBufferLabel), _, _))
.WillOnce(WithArgs<1, 2>(
Invoke([](void* buffer, uint64_t size) { return kFileError; })));
.WillOnce(WithArgs<1, 2>(Invoke([](void* buffer, uint64_t size) {
UNUSED(buffer);
UNUSED(size);
return kFileError;
})));
std::unique_ptr<File, FileCloser> reader(File::Open(file_name.c_str(), "r"));
ASSERT_TRUE(reader);
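For stubs and lambdas like the one above, an equivalent fix under -Wall -Wextra -Werror is to leave the parameters unnamed; a tiny standalone sketch of that alternative:

#include <cstdint>

// The buffer and size parameters are deliberately unnamed, so -Wextra has
// nothing to flag; trailing comments preserve readability.
int64_t FailingRead(void* /* buffer */, uint64_t /* size */) {
  return -1;  // Stand-in for the kFileError constant used in the test.
}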

View File

@ -6,28 +6,32 @@
#include "packager/file/file.h"
#include <gflags/gflags.h>
#include <inttypes.h>
#include <algorithm>
#include <filesystem>
#include <memory>
#include "packager/base/files/file_util.h"
#include "packager/base/logging.h"
#include "packager/base/strings/string_number_conversions.h"
#include "packager/base/strings/string_piece.h"
#include "packager/base/strings/stringprintf.h"
#include "absl/flags/flag.h"
#include "absl/strings/numbers.h"
#include "absl/strings/str_format.h"
#include "glog/logging.h"
#include "packager/file/callback_file.h"
#include "packager/file/file_util.h"
#include "packager/file/http_file.h"
#include "packager/file/local_file.h"
#include "packager/file/memory_file.h"
#include "packager/file/threaded_io_file.h"
#include "packager/file/udp_file.h"
#include "packager/file/http_file.h"
DEFINE_uint64(io_cache_size,
ABSL_FLAG(uint64_t,
io_cache_size,
32ULL << 20,
"Size of the threaded I/O cache, in bytes. Specify 0 to disable "
"threaded I/O.");
DEFINE_uint64(io_block_size,
ABSL_FLAG(uint64_t,
io_block_size,
1ULL << 16,
"Size of the block size used for threaded I/O, in bytes.");
@ -74,18 +78,20 @@ bool DeleteLocalFile(const char* file_name) {
bool WriteLocalFileAtomically(const char* file_name,
const std::string& contents) {
const base::FilePath file_path = base::FilePath::FromUTF8Unsafe(file_name);
const std::string dir_name = file_path.DirName().AsUTF8Unsafe();
const std::filesystem::path file_path(file_name);
const std::filesystem::path dir_path = file_path.parent_path();
std::string temp_file_name;
if (!TempFilePath(dir_name, &temp_file_name))
if (!TempFilePath(dir_path.string(), &temp_file_name))
return false;
if (!File::WriteStringToFile(temp_file_name.c_str(), contents))
return false;
base::File::Error replace_file_error = base::File::FILE_OK;
if (!base::ReplaceFile(base::FilePath::FromUTF8Unsafe(temp_file_name),
file_path, &replace_file_error)) {
std::error_code ec;
std::filesystem::rename(temp_file_name, file_name, ec);
if (ec) {
LOG(ERROR) << "Failed to replace file '" << file_name << "' with '"
<< temp_file_name << "', error: " << replace_file_error;
<< temp_file_name << "', error: " << ec;
return false;
}
return true;
@ -100,10 +106,12 @@ File* CreateUdpFile(const char* file_name, const char* mode) {
}
File* CreateHttpsFile(const char* file_name, const char* mode) {
UNUSED(mode); // TODO: choose method based on file mode
return new HttpFile(HttpMethod::kPut, std::string("https://") + file_name);
}
File* CreateHttpFile(const char* file_name, const char* mode) {
UNUSED(mode); // TODO: choose method based on file mode
return new HttpFile(HttpMethod::kPut, std::string("http://") + file_name);
}
@ -130,14 +138,14 @@ static const FileTypeInfo kFileTypeInfo[] = {
{kHttpsFilePrefix, &CreateHttpsFile, nullptr, nullptr},
};
base::StringPiece GetFileTypePrefix(base::StringPiece file_name) {
std::string_view GetFileTypePrefix(std::string_view file_name) {
size_t pos = file_name.find("://");
return (pos == std::string::npos) ? "" : file_name.substr(0, pos + 3);
}
const FileTypeInfo* GetFileTypeInfo(base::StringPiece file_name,
base::StringPiece* real_file_name) {
base::StringPiece file_type_prefix = GetFileTypePrefix(file_name);
const FileTypeInfo* GetFileTypeInfo(std::string_view file_name,
std::string_view* real_file_name) {
std::string_view file_type_prefix = GetFileTypePrefix(file_name);
for (const FileTypeInfo& file_type : kFileTypeInfo) {
if (file_type_prefix == file_type.type) {
*real_file_name = file_name.substr(file_type_prefix.size());
@ -155,23 +163,25 @@ File* File::Create(const char* file_name, const char* mode) {
std::unique_ptr<File, FileCloser> internal_file(
CreateInternalFile(file_name, mode));
base::StringPiece file_type_prefix = GetFileTypePrefix(file_name);
std::string_view file_type_prefix = GetFileTypePrefix(file_name);
if (file_type_prefix == kMemoryFilePrefix ||
file_type_prefix == kCallbackFilePrefix) {
// Disable caching for memory and callback files.
return internal_file.release();
}
if (FLAGS_io_cache_size) {
if (absl::GetFlag(FLAGS_io_cache_size)) {
// Enable threaded I/O for "r", "w", and "a" modes only.
if (!strcmp(mode, "r")) {
return new ThreadedIoFile(std::move(internal_file),
ThreadedIoFile::kInputMode, FLAGS_io_cache_size,
FLAGS_io_block_size);
ThreadedIoFile::kInputMode,
absl::GetFlag(FLAGS_io_cache_size),
absl::GetFlag(FLAGS_io_block_size));
} else if (!strcmp(mode, "w") || !strcmp(mode, "a")) {
return new ThreadedIoFile(std::move(internal_file),
ThreadedIoFile::kOutputMode,
FLAGS_io_cache_size, FLAGS_io_block_size);
absl::GetFlag(FLAGS_io_cache_size),
absl::GetFlag(FLAGS_io_block_size));
}
}
@ -181,7 +191,7 @@ File* File::Create(const char* file_name, const char* mode) {
}
File* File::CreateInternalFile(const char* file_name, const char* mode) {
base::StringPiece real_file_name;
std::string_view real_file_name;
const FileTypeInfo* file_type = GetFileTypeInfo(file_name, &real_file_name);
DCHECK(file_type);
// Calls constructor for the derived File class.
@ -212,7 +222,7 @@ File* File::OpenWithNoBuffering(const char* file_name, const char* mode) {
bool File::Delete(const char* file_name) {
static bool logged = false;
base::StringPiece real_file_name;
std::string_view real_file_name;
const FileTypeInfo* file_type = GetFileTypeInfo(file_name, &real_file_name);
DCHECK(file_type);
if (file_type->delete_function) {
@ -288,7 +298,7 @@ bool File::WriteStringToFile(const char* file_name,
bool File::WriteFileAtomically(const char* file_name,
const std::string& contents) {
VLOG(2) << "File::WriteFileAtomically: " << file_name;
base::StringPiece real_file_name;
std::string_view real_file_name;
const FileTypeInfo* file_type = GetFileTypeInfo(file_name, &real_file_name);
DCHECK(file_type);
if (file_type->atomic_write_function)
@ -386,28 +396,15 @@ int64_t File::CopyFile(File* source, File* destination, int64_t max_copy) {
}
bool File::IsLocalRegularFile(const char* file_name) {
base::StringPiece real_file_name;
std::string_view real_file_name;
const FileTypeInfo* file_type = GetFileTypeInfo(file_name, &real_file_name);
DCHECK(file_type);
if (file_type->type != kLocalFilePrefix)
return false;
#if defined(OS_WIN)
const base::FilePath file_path(
base::FilePath::FromUTF8Unsafe(real_file_name));
const DWORD fileattr = GetFileAttributes(file_path.value().c_str());
if (fileattr == INVALID_FILE_ATTRIBUTES) {
LOG(ERROR) << "Failed to GetFileAttributes of " << file_path.value();
return false;
}
return (fileattr & FILE_ATTRIBUTE_DIRECTORY) == 0;
#else
struct stat info;
if (stat(real_file_name.data(), &info) != 0) {
LOG(ERROR) << "Failed to run stat on " << real_file_name;
return false;
}
return S_ISREG(info.st_mode);
#endif
std::error_code ec;
return std::filesystem::is_regular_file(real_file_name, ec);
}
std::string File::MakeCallbackFileName(
@ -415,7 +412,7 @@ std::string File::MakeCallbackFileName(
const std::string& name) {
if (name.empty())
return "";
return base::StringPrintf("%s%" PRIdPTR "/%s", kCallbackFilePrefix,
return absl::StrFormat("%s%" PRIdPTR "/%s", kCallbackFilePrefix,
reinterpret_cast<intptr_t>(&callback_params),
name.c_str());
}
@ -426,8 +423,7 @@ bool File::ParseCallbackFileName(const std::string& callback_file_name,
size_t pos = callback_file_name.find("/");
int64_t callback_address = 0;
if (pos == std::string::npos ||
!base::StringToInt64(callback_file_name.substr(0, pos),
&callback_address)) {
!absl::SimpleAtoi(callback_file_name.substr(0, pos), &callback_address)) {
LOG(ERROR) << "Expecting CallbackFile with name like "
"'<callback address>/<entity name>', but seeing "
<< callback_file_name;
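The gflags-to-absl pattern in this file recurs throughout the PR. A condensed sketch of the three pieces, using a hypothetical flag name for illustration:

#include <cstdint>

#include "absl/flags/declare.h"
#include "absl/flags/flag.h"

// In the .cc file that owns the flag (replaces DEFINE_uint64):
ABSL_FLAG(uint64_t, example_cache_size, 1024,
          "Hypothetical flag, for illustration only.");

// In any other translation unit that reads it (replaces DECLARE_uint64):
// ABSL_DECLARE_FLAG(uint64_t, example_cache_size);

// Reads go through absl::GetFlag() instead of touching FLAGS_* directly;
// tests mutate values with absl::SetFlag(&FLAGS_example_cache_size, ...).
uint64_t CacheSize() {
  return absl::GetFlag(FLAGS_example_cache_size);
}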

View File

@ -1,78 +0,0 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
{
'variables': {
'shaka_code': 1,
},
'targets': [
{
'target_name': 'file',
'type': '<(component)',
'sources': [
'callback_file.cc',
'callback_file.h',
'file.cc',
'file.h',
'file_util.cc',
'file_util.h',
'file_closer.h',
'http_file.cc',
'http_file.h',
'io_cache.cc',
'io_cache.h',
'local_file.cc',
'local_file.h',
'memory_file.cc',
'memory_file.h',
'public/buffer_callback_params.h',
'threaded_io_file.cc',
'threaded_io_file.h',
'udp_file.cc',
'udp_file.h',
'udp_options.cc',
'udp_options.h',
],
'dependencies': [
'../base/base.gyp:base',
'../packager.gyp:status',
'../third_party/gflags/gflags.gyp:gflags',
'../third_party/curl/curl.gyp:libcurl',
'../version/version.gyp:version',
],
'conditions': [
['libpackager_type == "shared_library"', {
'defines': [
'SHARED_LIBRARY_BUILD',
'SHAKA_IMPLEMENTATION',
],
}],
],
},
{
'target_name': 'file_unittest',
'type': '<(gtest_target_type)',
'sources': [
'callback_file_unittest.cc',
'file_unittest.cc',
'file_util_unittest.cc',
'io_cache_unittest.cc',
'memory_file_unittest.cc',
'udp_options_unittest.cc',
'http_file_unittest.cc',
],
'dependencies': [
'../media/test/media_test.gyp:run_tests_with_atexit_manager',
'../testing/gmock.gyp:gmock',
'../testing/gtest.gyp:gtest',
'../third_party/gflags/gflags.gyp:gflags',
'../third_party/curl/curl.gyp:libcurl',
'../version/version.gyp:version',
'file',
],
},
],
}

View File

@ -11,9 +11,9 @@
#include <string>
#include "packager/base/macros.h"
#include "packager/common.h"
#include "packager/file/public/buffer_callback_params.h"
#include "packager/status.h"
#include "packager/status/status.h"
namespace shaka {

View File

@ -7,7 +7,7 @@
#ifndef MEDIA_FILE_FILE_CLOSER_H_
#define MEDIA_FILE_FILE_CLOSER_H_
#include "packager/base/logging.h"
#include "glog/logging.h"
#include "packager/file/file.h"
namespace shaka {

View File

@ -4,22 +4,85 @@
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include <gflags/gflags.h>
#include <gtest/gtest.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include "packager/base/files/file_util.h"
#include <filesystem>
#include "absl/flags/declare.h"
#include "packager/file/file.h"
#include "packager/flag_saver.h"
DECLARE_uint64(io_cache_size);
DECLARE_uint64(io_block_size);
ABSL_DECLARE_FLAG(uint64_t, io_cache_size);
ABSL_DECLARE_FLAG(uint64_t, io_block_size);
namespace {
const int kDataSize = 1024;
// Write a file with standard C library routines.
void WriteFile(const std::string& path, const std::string& data) {
FILE* f = fopen(path.c_str(), "wb");
ASSERT_EQ(data.size(), fwrite(data.data(), 1, data.size(), f));
fclose(f);
}
namespace shaka {
void DeleteFile(const std::string& path) {
std::error_code ec;
std::filesystem::remove(path, ec);
// Ignore errors.
}
using base::FilePath;
int64_t FileSize(const std::string& path) {
std::error_code ec;
int64_t file_size = std::filesystem::file_size(path, ec);
if (ec) {
return -1;
}
return file_size;
}
// Returns num bytes read, up to max_size.
uint64_t ReadFile(const std::string& path,
std::string* data,
uint32_t max_size) {
FILE* f = fopen(path.c_str(), "rb");
if (!f) {
return 0;
}
data->resize(max_size);
uint64_t bytes = fread(data->data(), 1, max_size, f);
data->resize(bytes);
return bytes;
}
std::string generate_unique_temp_path() {
// Generate a unique name for a temporary file, using standard library
// routines, to avoid a circular dependency on any of our own code for
// generating temporary files. The template must end in 6 X's.
std::filesystem::path temp_path_template =
(std::filesystem::temp_directory_path() / "packager-test.XXXXXX");
std::string temp_path_template_string = temp_path_template.string();
#if defined(OS_WIN)
// _mktemp will modify the string passed to it to reflect the generated name
// (replacing the X characters with something else).
_mktemp(temp_path_template_string.data());
#else
// mkstemp will create and open the file, modify the string passed to it to
// reflect the generated name (replacing the X characters with something
// else), and return an open file descriptor. Then we close it and use the
// generated name.
int fd = mkstemp(temp_path_template_string.data());
close(fd);
#endif
return temp_path_template_string;
}
} // namespace
namespace shaka {
class LocalFileTest : public testing::Test {
protected:
@ -28,26 +91,27 @@ class LocalFileTest : public testing::Test {
for (int i = 0; i < kDataSize; ++i)
data_[i] = i % 256;
// Test file path for file_util API.
ASSERT_TRUE(base::CreateTemporaryFile(&test_file_path_));
local_file_name_no_prefix_ = test_file_path_.AsUTF8Unsafe();
local_file_name_no_prefix_ = generate_unique_temp_path();
// Local file name with prefix for File API.
local_file_name_ = kLocalFilePrefix;
local_file_name_ += local_file_name_no_prefix_;
// Use LocalFile directly without ThreadedIoFile.
backup_io_cache_size.reset(new FlagSaver<uint64_t>(&FLAGS_io_cache_size));
absl::SetFlag(&FLAGS_io_cache_size, 0);
}
void TearDown() override {
// Remove test file if created.
base::DeleteFile(FilePath::FromUTF8Unsafe(local_file_name_no_prefix_),
false);
DeleteFile(local_file_name_no_prefix_);
}
std::unique_ptr<FlagSaver<uint64_t>> backup_io_cache_size;
std::string data_;
// Path to the temporary file for this test.
FilePath test_file_path_;
// Same as |test_file_path_| but in string form.
// A path to a temporary test file.
std::string local_file_name_no_prefix_;
// Same as |local_file_name_no_prefix_| but with the file prefix.
@ -56,38 +120,30 @@ class LocalFileTest : public testing::Test {
TEST_F(LocalFileTest, ReadNotExist) {
// Remove test file if it exists.
base::DeleteFile(FilePath::FromUTF8Unsafe(local_file_name_no_prefix_), false);
DeleteFile(local_file_name_no_prefix_);
ASSERT_TRUE(File::Open(local_file_name_.c_str(), "r") == NULL);
}
TEST_F(LocalFileTest, Size) {
ASSERT_EQ(kDataSize,
base::WriteFile(test_file_path_, data_.data(), kDataSize));
WriteFile(local_file_name_no_prefix_, data_);
ASSERT_EQ(kDataSize, File::GetFileSize(local_file_name_.c_str()));
}
TEST_F(LocalFileTest, Copy) {
ASSERT_EQ(kDataSize,
base::WriteFile(test_file_path_, data_.data(), kDataSize));
WriteFile(local_file_name_no_prefix_, data_);
FilePath temp_dir;
ASSERT_TRUE(base::CreateNewTempDirectory(FilePath::StringType(), &temp_dir));
std::string destination = generate_unique_temp_path();
ASSERT_TRUE(File::Copy(local_file_name_.c_str(), destination.c_str()));
// Copy the test file to temp dir as filename "a".
FilePath destination = temp_dir.Append(FilePath::FromUTF8Unsafe("a"));
ASSERT_TRUE(File::Copy(
FilePath::FromUTF8Unsafe(local_file_name_).AsUTF8Unsafe().c_str(),
destination.AsUTF8Unsafe().c_str()));
ASSERT_EQ(kDataSize, FileSize(destination));
// Make a buffer bigger than the expected file content size to make sure that
// there isn't extra stuff appended.
char copied_file_content_buffer[kDataSize * 2] = {};
ASSERT_EQ(kDataSize, base::ReadFile(destination, copied_file_content_buffer,
arraysize(copied_file_content_buffer)));
// Try to read twice as much data as expected, to make sure that there isn't
// extra stuff appended.
std::string read_data;
ASSERT_EQ(kDataSize, ReadFile(destination, &read_data, kDataSize * 2));
ASSERT_EQ(data_, read_data);
ASSERT_EQ(data_, std::string(copied_file_content_buffer, kDataSize));
base::DeleteFile(temp_dir, true);
DeleteFile(destination);
}
TEST_F(LocalFileTest, Write) {
@ -98,19 +154,17 @@ TEST_F(LocalFileTest, Write) {
EXPECT_EQ(kDataSize, file->Size());
EXPECT_TRUE(file->Close());
// Read file using file_util API.
std::string read_data(kDataSize, 0);
std::string read_data;
ASSERT_EQ(kDataSize, FileSize(local_file_name_no_prefix_));
ASSERT_EQ(kDataSize,
base::ReadFile(test_file_path_, &read_data[0], kDataSize));
ReadFile(local_file_name_no_prefix_, &read_data, kDataSize));
// Compare data written and read.
EXPECT_EQ(data_, read_data);
}
TEST_F(LocalFileTest, Read_And_Eof) {
// Write file using file_util API.
ASSERT_EQ(kDataSize,
base::WriteFile(test_file_path_, data_.data(), kDataSize));
WriteFile(local_file_name_no_prefix_, data_);
// Read file using File API.
File* file = File::Open(local_file_name_.c_str(), "r");
@ -195,9 +249,8 @@ TEST_F(LocalFileTest, WriteFlushCheckSize) {
}
}
TEST_F(LocalFileTest, IsLocalReguar) {
ASSERT_EQ(kDataSize,
base::WriteFile(test_file_path_, data_.data(), kDataSize));
TEST_F(LocalFileTest, IsLocalRegular) {
WriteFile(local_file_name_no_prefix_, data_);
ASSERT_TRUE(File::IsLocalRegularFile(local_file_name_.c_str()));
}
@ -209,9 +262,10 @@ TEST_P(ParamLocalFileTest, SeekWriteAndSeekRead) {
const uint32_t kInitialWriteSize(100);
const uint32_t kFinalFileSize(200);
google::FlagSaver flag_saver;
FLAGS_io_block_size = kBlockSize;
FLAGS_io_cache_size = GetParam();
FlagSaver local_backup_io_block_size(&FLAGS_io_block_size);
FlagSaver local_backup_io_cache_size(&FLAGS_io_cache_size);
absl::SetFlag(&FLAGS_io_block_size, kBlockSize);
absl::SetFlag(&FLAGS_io_cache_size, GetParam());
std::vector<uint8_t> buffer(kInitialWriteSize);
File* file = File::Open(local_file_name_no_prefix_.c_str(), "w");
@ -223,19 +277,32 @@ TEST_P(ParamLocalFileTest, SeekWriteAndSeekRead) {
ASSERT_EQ(kInitialWriteSize, position);
for (uint8_t offset = 0; offset < kFinalFileSize; ++offset) {
// Seek to each offset, check that the position matches.
EXPECT_TRUE(file->Seek(offset));
ASSERT_TRUE(file->Tell(&position));
EXPECT_EQ(offset, position);
// Write two bytes of data at this offset (NULs), check that the position
// was advanced by two bytes.
EXPECT_EQ(2u, file->Write(buffer.data(), 2u));
ASSERT_TRUE(file->Tell(&position));
EXPECT_EQ(offset + 2u, position);
// Seek to the byte right after the original offset (the second NUL we
// wrote), check that the position matches.
++offset;
EXPECT_TRUE(file->Seek(offset));
ASSERT_TRUE(file->Tell(&position));
EXPECT_EQ(offset, position);
// Overwrite the byte at this position, with a value matching the current
// offset, check that the position was advanced by one byte.
EXPECT_EQ(1, file->Write(&offset, 1));
ASSERT_TRUE(file->Tell(&position));
EXPECT_EQ(offset + 1u, position);
// The pattern in bytes will be:
// 0x00, 0x01, 0x00, 0x03, 0x00, 0x05, ...
}
EXPECT_EQ(kFinalFileSize, file->Size());
ASSERT_TRUE(file->Close());
@ -244,67 +311,33 @@ TEST_P(ParamLocalFileTest, SeekWriteAndSeekRead) {
ASSERT_TRUE(file != nullptr);
for (uint8_t offset = 1; offset < kFinalFileSize; offset += 2) {
uint8_t read_byte;
// Seek to the odd bytes, which should have values matching their offsets.
EXPECT_TRUE(file->Seek(offset));
ASSERT_TRUE(file->Tell(&position));
EXPECT_EQ(offset, position);
// Read a byte, check that the position was advanced by one byte, and that
// the value matches what we wrote in the loop above (the offset).
EXPECT_EQ(1, file->Read(&read_byte, 1));
ASSERT_TRUE(file->Tell(&position));
EXPECT_EQ(offset + 1u, position);
EXPECT_EQ(offset, read_byte);
}
// We can't read any more at this position (the end).
EXPECT_EQ(0, file->Read(buffer.data(), 1));
// If we seek back to 0, we can read another byte.
ASSERT_TRUE(file->Seek(0));
EXPECT_EQ(1, file->Read(buffer.data(), 1));
EXPECT_TRUE(file->Close());
}
INSTANTIATE_TEST_CASE_P(TestSeekWithDifferentCacheSizes,
INSTANTIATE_TEST_SUITE_P(TestSeekWithDifferentCacheSizes,
ParamLocalFileTest,
::testing::Values(20u, 1000u));
// This test should only be enabled for filesystems which do not allow seeking
// past EOF.
TEST_F(LocalFileTest, DISABLED_WriteSeekOutOfBounds) {
const uint32_t kFileSize(100);
std::vector<uint8_t> buffer(kFileSize);
File* file = File::Open(local_file_name_no_prefix_.c_str(), "w");
ASSERT_TRUE(file != nullptr);
ASSERT_EQ(kFileSize, file->Write(buffer.data(), kFileSize));
ASSERT_EQ(kFileSize, file->Size());
EXPECT_FALSE(file->Seek(kFileSize + 1));
EXPECT_TRUE(file->Seek(kFileSize));
EXPECT_EQ(1, file->Write(buffer.data(), 1));
EXPECT_TRUE(file->Seek(kFileSize + 1));
EXPECT_EQ(kFileSize + 1, file->Size());
}
// This test should only be enabled for filesystems which do not allow seeking
// past EOF.
TEST_F(LocalFileTest, DISABLED_ReadSeekOutOfBounds) {
const uint32_t kFileSize(100);
File::Delete(local_file_name_no_prefix_.c_str());
std::vector<uint8_t> buffer(kFileSize);
File* file = File::Open(local_file_name_no_prefix_.c_str(), "w");
ASSERT_TRUE(file != nullptr);
ASSERT_EQ(kFileSize, file->Write(buffer.data(), kFileSize));
ASSERT_EQ(kFileSize, file->Size());
ASSERT_TRUE(file->Close());
file = File::Open(local_file_name_no_prefix_.c_str(), "r");
ASSERT_TRUE(file != nullptr);
EXPECT_FALSE(file->Seek(kFileSize + 1));
EXPECT_TRUE(file->Seek(kFileSize));
uint64_t position;
EXPECT_TRUE(file->Tell(&position));
EXPECT_EQ(kFileSize, position);
EXPECT_EQ(0u, file->Read(buffer.data(), 1));
EXPECT_TRUE(file->Seek(0));
EXPECT_TRUE(file->Tell(&position));
EXPECT_EQ(0u, position);
EXPECT_EQ(kFileSize, file->Read(buffer.data(), kFileSize));
EXPECT_EQ(0u, file->Read(buffer.data(), 1));
EXPECT_TRUE(file->Close());
}
// 0 disables cache, 20 is small, 61 is prime, and 1000
// is just under the data size of 1k.
::testing::Values(0u, 20u, 61u, 1000u));
TEST(FileTest, MakeCallbackFileName) {
const BufferCallbackParams* params =

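This test includes packager/flag_saver.h, which is not shown in this diff. An RAII helper along the following lines would fit the usage above; this is a sketch under that assumption, not the actual implementation:

#include "absl/flags/flag.h"

// Hypothetical sketch of a FlagSaver<T> compatible with the usage above.
template <typename T>
class FlagSaver {
 public:
  explicit FlagSaver(absl::Flag<T>* flag)
      : flag_(flag), original_value_(absl::GetFlag(*flag)) {}

  // Restore the original value when the saver leaves scope.
  ~FlagSaver() { absl::SetFlag(flag_, original_value_); }

 private:
  absl::Flag<T>* flag_;
  T original_value_;
};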
View File

@ -8,47 +8,43 @@
#include <inttypes.h>
#include "packager/base/files/file_path.h"
#include "packager/base/files/file_util.h"
#include "packager/base/process/process_handle.h"
#include "packager/base/strings/stringprintf.h"
#include "packager/base/threading/platform_thread.h"
#include "packager/base/time/time.h"
#if defined(OS_WIN)
#include <windows.h>
#else
#include <unistd.h>
#endif
#include <filesystem>
#include <thread>
#include "absl/strings/str_format.h"
namespace shaka {
namespace {
// Create a temp file name using process id, thread id and current time.
std::string TempFileName() {
const int32_t process_id = static_cast<int32_t>(base::GetCurrentProcId());
const int32_t thread_id =
static_cast<int32_t>(base::PlatformThread::CurrentId());
#if defined(OS_WIN)
const uint32_t process_id = static_cast<uint32_t>(GetCurrentProcessId());
#else
const uint32_t process_id = static_cast<uint32_t>(getpid());
#endif
const size_t thread_id =
std::hash<std::thread::id>{}(std::this_thread::get_id());
// We may need two or more temporary files in the same thread. There might be
// name collisions if they are requested around the same time, e.g. called
// consecutively. Use a thread_local instance to avoid that.
static thread_local int32_t instance_id = 0;
static thread_local uint32_t instance_id = 0;
++instance_id;
const int64_t current_time = base::Time::Now().ToInternalValue();
return base::StringPrintf("packager-tempfile-%x-%x-%x-%" PRIx64, process_id,
thread_id, instance_id, current_time);
return absl::StrFormat("packager-tempfile-%x-%zx-%x", process_id, thread_id,
instance_id);
}
} // namespace
bool TempFilePath(const std::string& temp_dir, std::string* temp_file_path) {
if (temp_dir.empty()) {
base::FilePath file_path;
if (!base::CreateTemporaryFile(&file_path)) {
LOG(ERROR) << "Failed to create temporary file.";
return false;
}
*temp_file_path = file_path.AsUTF8Unsafe();
} else {
*temp_file_path =
base::FilePath::FromUTF8Unsafe(temp_dir)
.Append(base::FilePath::FromUTF8Unsafe(TempFileName()))
.AsUTF8Unsafe();
}
std::filesystem::path temp_dir_path(temp_dir);
*temp_file_path = (temp_dir_path / TempFileName()).string();
return true;
}
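A quick illustration of the std::filesystem composition now used in TempFilePath (paths shown are examples):

#include <filesystem>
#include <iostream>

int main() {
  // operator/ joins components with the platform's separator.
  std::filesystem::path dir = std::filesystem::temp_directory_path();
  std::cout << (dir / "packager-tempfile-1a2b").string() << "\n";

  // An empty directory on the left yields a bare relative path, so an empty
  // temp_dir argument places the file in the current directory.
  std::cout << (std::filesystem::path() / "packager-tempfile-1a2b").string()
            << "\n";
}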

View File

@ -8,7 +8,7 @@
#include <gtest/gtest.h>
#include "packager/base/logging.h"
#include "glog/logging.h"
namespace shaka {

View File

@ -7,36 +7,45 @@
#include "packager/file/http_file.h"
#include <curl/curl.h>
#include <gflags/gflags.h>
#include "packager/base/bind.h"
#include "packager/base/files/file_util.h"
#include "packager/base/logging.h"
#include "packager/base/strings/string_number_conversions.h"
#include "packager/base/strings/stringprintf.h"
#include "packager/base/threading/worker_pool.h"
#include "absl/flags/declare.h"
#include "absl/flags/flag.h"
#include "absl/strings/escaping.h"
#include "absl/strings/str_format.h"
#include "glog/logging.h"
#include "packager/common.h"
#include "packager/file/thread_pool.h"
#include "packager/version/version.h"
DEFINE_string(user_agent, "",
ABSL_FLAG(std::string,
user_agent,
"",
"Set a custom User-Agent string for HTTP requests.");
DEFINE_string(ca_file,
ABSL_FLAG(std::string,
ca_file,
"",
"Absolute path to the Certificate Authority file for the "
"server cert. PEM format");
DEFINE_string(client_cert_file,
ABSL_FLAG(std::string,
client_cert_file,
"",
"Absolute path to client certificate file.");
DEFINE_string(client_cert_private_key_file,
ABSL_FLAG(std::string,
client_cert_private_key_file,
"",
"Absolute path to the Private Key file.");
DEFINE_string(client_cert_private_key_password,
ABSL_FLAG(std::string,
client_cert_private_key_password,
"",
"Password to the private key file.");
DEFINE_bool(disable_peer_verification,
ABSL_FLAG(bool,
disable_peer_verification,
false,
"Disable peer verification. This is needed to talk to servers "
"without valid certificates.");
DECLARE_uint64(io_cache_size);
ABSL_DECLARE_FLAG(uint64_t, io_cache_size);
namespace shaka {
@ -114,11 +123,12 @@ int CurlDebugCallback(CURL* /* handle */,
return 0;
}
const std::string data_string(data, size);
VLOG(log_level) << "\n\n"
<< type_text << " (0x" << std::hex << size << std::dec
<< " bytes)\n"
<< (in_hex ? base::HexEncode(data, size)
: std::string(data, size));
<< (in_hex ? absl::BytesToHexString(data_string)
: data_string);
return 0;
}
@ -163,13 +173,17 @@ HttpFile::HttpFile(HttpMethod method,
upload_content_type_(upload_content_type),
timeout_in_seconds_(timeout_in_seconds),
method_(method),
download_cache_(FLAGS_io_cache_size),
upload_cache_(FLAGS_io_cache_size),
download_cache_(absl::GetFlag(FLAGS_io_cache_size)),
upload_cache_(absl::GetFlag(FLAGS_io_cache_size)),
curl_(curl_easy_init()),
status_(Status::OK),
user_agent_(FLAGS_user_agent),
task_exit_event_(base::WaitableEvent::ResetPolicy::MANUAL,
base::WaitableEvent::InitialState::NOT_SIGNALED) {
user_agent_(absl::GetFlag(FLAGS_user_agent)),
ca_file_(absl::GetFlag(FLAGS_ca_file)),
client_cert_file_(absl::GetFlag(FLAGS_client_cert_file)),
client_cert_private_key_file_(
absl::GetFlag(FLAGS_client_cert_private_key_file)),
client_cert_private_key_password_(
absl::GetFlag(FLAGS_client_cert_private_key_password)) {
static LibCurlInitializer lib_curl_initializer;
if (user_agent_.empty()) {
user_agent_ += "ShakaPackager/" + GetPackagerVersion();
@ -212,9 +226,7 @@ bool HttpFile::Open() {
// TODO: Implement retrying with exponential backoff, see
// "widevine_key_source.cc"
base::WorkerPool::PostTask(
FROM_HERE, base::Bind(&HttpFile::ThreadMain, base::Unretained(this)),
/* task_is_slow= */ true);
ThreadPool::instance.PostTask(std::bind(&HttpFile::ThreadMain, this));
return true;
}
@ -225,7 +237,7 @@ Status HttpFile::CloseWithStatus() {
// will wait for more data forever.
download_cache_.Close();
upload_cache_.Close();
task_exit_event_.Wait();
task_exit_event_.WaitForNotification();
const Status result = status_;
LOG_IF(ERROR, !result.ok()) << "HttpFile request failed: " << result;
@ -258,11 +270,13 @@ bool HttpFile::Flush() {
}
bool HttpFile::Seek(uint64_t position) {
UNUSED(position);
LOG(ERROR) << "HttpFile does not support Seek().";
return false;
}
bool HttpFile::Tell(uint64_t* position) {
UNUSED(position);
LOG(ERROR) << "HttpFile does not support Tell().";
return false;
}
@ -304,25 +318,24 @@ void HttpFile::SetupRequest() {
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, request_headers_.get());
if (FLAGS_disable_peer_verification)
if (absl::GetFlag(FLAGS_disable_peer_verification))
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
// Client authentication
if (!FLAGS_client_cert_private_key_file.empty() &&
!FLAGS_client_cert_file.empty()) {
if (!client_cert_private_key_file_.empty() && !client_cert_file_.empty()) {
curl_easy_setopt(curl, CURLOPT_SSLKEY,
FLAGS_client_cert_private_key_file.data());
curl_easy_setopt(curl, CURLOPT_SSLCERT, FLAGS_client_cert_file.data());
client_cert_private_key_file_.data());
curl_easy_setopt(curl, CURLOPT_SSLCERT, client_cert_file_.data());
curl_easy_setopt(curl, CURLOPT_SSLKEYTYPE, "PEM");
curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "PEM");
if (!FLAGS_client_cert_private_key_password.empty()) {
if (!client_cert_private_key_password_.empty()) {
curl_easy_setopt(curl, CURLOPT_KEYPASSWD,
FLAGS_client_cert_private_key_password.data());
client_cert_private_key_password_.data());
}
}
if (!FLAGS_ca_file.empty()) {
curl_easy_setopt(curl, CURLOPT_CAINFO, FLAGS_ca_file.data());
if (!ca_file_.empty()) {
curl_easy_setopt(curl, CURLOPT_CAINFO, ca_file_.data());
}
if (VLOG_IS_ON(kMinLogLevelForCurlDebugFunction)) {
@ -340,8 +353,7 @@ void HttpFile::ThreadMain() {
if (res == CURLE_HTTP_RETURNED_ERROR) {
long response_code = 0;
curl_easy_getinfo(curl_.get(), CURLINFO_RESPONSE_CODE, &response_code);
error_message +=
base::StringPrintf(", response code: %ld.", response_code);
error_message += absl::StrFormat(", response code: %ld.", response_code);
}
status_ = Status(
@ -350,7 +362,7 @@ void HttpFile::ThreadMain() {
}
download_cache_.Close();
task_exit_event_.Signal();
task_exit_event_.Notify();
}
} // namespace shaka

View File

@ -10,10 +10,10 @@
#include <memory>
#include <string>
#include "packager/base/synchronization/waitable_event.h"
#include "absl/synchronization/notification.h"
#include "packager/file/file.h"
#include "packager/file/io_cache.h"
#include "packager/status.h"
typedef void CURL;
struct curl_slist;
@ -84,9 +84,13 @@ class HttpFile : public File {
std::unique_ptr<curl_slist, CurlDelete> request_headers_;
Status status_;
std::string user_agent_;
std::string ca_file_;
std::string client_cert_file_;
std::string client_cert_private_key_file_;
std::string client_cert_private_key_password_;
// Signaled when the "curl easy perform" task completes.
base::WaitableEvent task_exit_event_;
absl::Notification task_exit_event_;
};
} // namespace shaka
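absl::Notification is a one-shot event: Notify() may be called at most once, and WaitForNotification() blocks until it has been. A minimal sketch of the pattern HttpFile now relies on:

#include <thread>

#include "absl/synchronization/notification.h"

void RunOnce() {
  absl::Notification done;
  std::thread worker([&done]() {
    // ... perform the request ...
    done.Notify();  // One-shot; unlike WaitableEvent, it cannot be reset.
  });
  done.WaitForNotification();  // Returns once Notify() has been called.
  worker.join();
}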

View File

@ -11,17 +11,13 @@
#include <memory>
#include <vector>
#include "packager/base/json/json_reader.h"
#include "packager/base/values.h"
#include "absl/strings/str_split.h"
#include "nlohmann/json.hpp"
#include "packager/file/file.h"
#include "packager/file/file_closer.h"
#define ASSERT_JSON_STRING(json, key, value) \
do { \
std::string actual; \
ASSERT_TRUE((json)->GetString((key), &actual)); \
ASSERT_EQ(actual, (value)); \
} while (false)
ASSERT_EQ(GetJsonString((json), (key)), (value)) << "JSON is " << (json)
namespace shaka {
@ -29,7 +25,27 @@ namespace {
using FilePtr = std::unique_ptr<HttpFile, FileCloser>;
std::unique_ptr<base::DictionaryValue> HandleResponse(const FilePtr& file) {
// Handles keys with dots, indicating a nested field.
std::string GetJsonString(const nlohmann::json& json,
const std::string& combined_key) {
std::vector<std::string> keys = absl::StrSplit(combined_key, '.');
nlohmann::json current = json;
for (const std::string& key : keys) {
if (!current.contains(key)) {
return "";
}
current = current[key];
}
if (current.is_string()) {
return current.get<std::string>();
}
return "";
}
nlohmann::json HandleResponse(const FilePtr& file) {
std::string result;
while (true) {
char buffer[64 * 1024];
@ -42,27 +58,26 @@ std::unique_ptr<base::DictionaryValue> HandleResponse(const FilePtr& file) {
}
VLOG(1) << "Response:\n" << result;
auto value = base::JSONReader::Read(result);
if (!value || !value->IsType(base::Value::TYPE_DICTIONARY))
return nullptr;
return std::unique_ptr<base::DictionaryValue>{
static_cast<base::DictionaryValue*>(value.release())};
nlohmann::json value = nlohmann::json::parse(result,
/* parser callback */ nullptr,
/* allow exceptions */ false);
return value;
}
} // namespace
TEST(HttpFileTest, DISABLED_BasicGet) {
TEST(HttpFileTest, BasicGet) {
FilePtr file(new HttpFile(HttpMethod::kGet, "https://httpbin.org/anything"));
ASSERT_TRUE(file);
ASSERT_TRUE(file->Open());
auto json = HandleResponse(file);
ASSERT_TRUE(json);
ASSERT_TRUE(json.is_object());
ASSERT_TRUE(file.release()->Close());
ASSERT_JSON_STRING(json, "method", "GET");
}
TEST(HttpFileTest, DISABLED_CustomHeaders) {
TEST(HttpFileTest, CustomHeaders) {
std::vector<std::string> headers{"Host: foo", "X-My-Header: Something"};
FilePtr file(new HttpFile(HttpMethod::kGet, "https://httpbin.org/anything",
"", headers, 0));
@ -70,7 +85,7 @@ TEST(HttpFileTest, DISABLED_CustomHeaders) {
ASSERT_TRUE(file->Open());
auto json = HandleResponse(file);
ASSERT_TRUE(json);
ASSERT_TRUE(json.is_object());
ASSERT_TRUE(file.release()->Close());
ASSERT_JSON_STRING(json, "method", "GET");
@ -78,7 +93,7 @@ TEST(HttpFileTest, DISABLED_CustomHeaders) {
ASSERT_JSON_STRING(json, "headers.X-My-Header", "Something");
}
TEST(HttpFileTest, DISABLED_BasicPost) {
TEST(HttpFileTest, BasicPost) {
FilePtr file(new HttpFile(HttpMethod::kPost, "https://httpbin.org/anything"));
ASSERT_TRUE(file);
ASSERT_TRUE(file->Open());
@ -90,17 +105,25 @@ TEST(HttpFileTest, DISABLED_BasicPost) {
ASSERT_TRUE(file->Flush());
auto json = HandleResponse(file);
ASSERT_TRUE(json);
ASSERT_TRUE(json.is_object());
ASSERT_TRUE(file.release()->Close());
ASSERT_JSON_STRING(json, "method", "POST");
ASSERT_JSON_STRING(json, "data", data);
ASSERT_JSON_STRING(json, "headers.Content-Type", "application/octet-stream");
// Curl may choose to send chunked or not based on the data. We request
// chunked encoding, but don't control if it is actually used. If we get
// chunked transfer, there is no Content-Length header reflected back to us.
if (!GetJsonString(json, "headers.Content-Length").empty()) {
ASSERT_JSON_STRING(json, "headers.Content-Length",
std::to_string(data.size()));
} else {
ASSERT_JSON_STRING(json, "headers.Transfer-Encoding", "chunked");
}
}
TEST(HttpFileTest, DISABLED_BasicPut) {
TEST(HttpFileTest, BasicPut) {
FilePtr file(new HttpFile(HttpMethod::kPut, "https://httpbin.org/anything"));
ASSERT_TRUE(file);
ASSERT_TRUE(file->Open());
@ -112,17 +135,25 @@ TEST(HttpFileTest, DISABLED_BasicPut) {
ASSERT_TRUE(file->Flush());
auto json = HandleResponse(file);
ASSERT_TRUE(json);
ASSERT_TRUE(json.is_object());
ASSERT_TRUE(file.release()->Close());
ASSERT_JSON_STRING(json, "method", "PUT");
ASSERT_JSON_STRING(json, "data", data);
ASSERT_JSON_STRING(json, "headers.Content-Type", "application/octet-stream");
// Curl may choose to send chunked or not based on the data. We request
// chunked encoding, but don't control if it is actually used. If we get
// chunked transfer, there is no Content-Length header reflected back to us.
if (!GetJsonString(json, "headers.Content-Length").empty()) {
ASSERT_JSON_STRING(json, "headers.Content-Length",
std::to_string(data.size()));
} else {
ASSERT_JSON_STRING(json, "headers.Transfer-Encoding", "chunked");
}
}
TEST(HttpFileTest, DISABLED_MultipleWrites) {
TEST(HttpFileTest, MultipleWrites) {
FilePtr file(new HttpFile(HttpMethod::kPut, "https://httpbin.org/anything"));
ASSERT_TRUE(file);
ASSERT_TRUE(file->Open());
@ -143,22 +174,28 @@ TEST(HttpFileTest, DISABLED_MultipleWrites) {
ASSERT_TRUE(file->Flush());
auto json = HandleResponse(file);
ASSERT_TRUE(json);
ASSERT_TRUE(json.is_object());
ASSERT_TRUE(file.release()->Close());
ASSERT_JSON_STRING(json, "method", "PUT");
ASSERT_JSON_STRING(json, "data", data1 + data2 + data3 + data4);
ASSERT_JSON_STRING(json, "headers.Content-Type", "application/octet-stream");
// Curl may choose to send chunked or not based on the data. We request
// chunked encoding, but don't control if it is actually used. If we get
// chunked transfer, there is no Content-Length header reflected back to us.
if (!GetJsonString(json, "headers.Content-Length").empty()) {
auto totalSize = data1.size() + data2.size() + data3.size() + data4.size();
ASSERT_JSON_STRING(json, "headers.Content-Length",
std::to_string(data1.size() + data2.size() + data3.size() +
data4.size()));
std::to_string(totalSize));
} else {
ASSERT_JSON_STRING(json, "headers.Transfer-Encoding", "chunked");
}
}
// TODO: Test chunked uploads. Since we can only read the response, we have no
// way to detect if we are streaming the upload like we want. httpbin seems to
// populate the Content-Length even if we don't give it in the request.
// TODO: Test chunked uploads explicitly.
TEST(HttpFileTest, DISABLED_Error404) {
TEST(HttpFileTest, Error404) {
FilePtr file(
new HttpFile(HttpMethod::kGet, "https://httpbin.org/status/404"));
ASSERT_TRUE(file);
@ -173,7 +210,7 @@ TEST(HttpFileTest, DISABLED_Error404) {
ASSERT_EQ(status.error_code(), error::HTTP_FAILURE);
}
TEST(HttpFileTest, DISABLED_TimeoutTriggered) {
TEST(HttpFileTest, TimeoutTriggered) {
FilePtr file(
new HttpFile(HttpMethod::kGet, "https://httpbin.org/delay/8", "", {}, 1));
ASSERT_TRUE(file);
@ -188,14 +225,14 @@ TEST(HttpFileTest, DISABLED_TimeoutTriggered) {
ASSERT_EQ(status.error_code(), error::TIME_OUT);
}
TEST(HttpFileTest, DISABLED_TimeoutNotTriggered) {
TEST(HttpFileTest, TimeoutNotTriggered) {
FilePtr file(
new HttpFile(HttpMethod::kGet, "https://httpbin.org/delay/1", "", {}, 5));
ASSERT_TRUE(file);
ASSERT_TRUE(file->Open());
auto json = HandleResponse(file);
ASSERT_TRUE(json);
ASSERT_TRUE(json.is_object());
ASSERT_TRUE(file.release()->Close());
}
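When nlohmann::json::parse is called with allow_exceptions=false, a parse failure yields a "discarded" value rather than a throw, which is why the tests check is_object() after HandleResponse. A small sketch of that behavior:

#include <iostream>
#include <string>

#include "nlohmann/json.hpp"

// Exception-free parsing as used in HandleResponse: on failure, parse()
// returns a value for which is_discarded() is true instead of throwing.
bool IsJsonObject(const std::string& body) {
  nlohmann::json value = nlohmann::json::parse(body,
                                               /* parser callback */ nullptr,
                                               /* allow exceptions */ false);
  if (value.is_discarded()) {
    std::cerr << "Not valid JSON: " << body << "\n";
    return false;
  }
  return value.is_object();
}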

View File

@ -10,19 +10,12 @@
#include <algorithm>
#include "packager/base/logging.h"
#include "glog/logging.h"
namespace shaka {
using base::AutoLock;
using base::AutoUnlock;
IoCache::IoCache(uint64_t cache_size)
: cache_size_(cache_size),
read_event_(base::WaitableEvent::ResetPolicy::AUTOMATIC,
base::WaitableEvent::InitialState::NOT_SIGNALED),
write_event_(base::WaitableEvent::ResetPolicy::AUTOMATIC,
base::WaitableEvent::InitialState::NOT_SIGNALED),
// Make the buffer one byte larger than the cache so that the
// condition r_ptr == w_ptr is unambiguous (buffer empty).
circular_buffer_(cache_size + 1),
@ -38,10 +31,9 @@ IoCache::~IoCache() {
uint64_t IoCache::Read(void* buffer, uint64_t size) {
DCHECK(buffer);
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
while (!closed_ && (BytesCachedInternal() == 0)) {
AutoUnlock unlock(lock_);
write_event_.Wait();
write_event_.Wait(&mutex_);
}
size = std::min(size, BytesCachedInternal());
@ -69,13 +61,12 @@ uint64_t IoCache::Write(const void* buffer, uint64_t size) {
const uint8_t* r_ptr(static_cast<const uint8_t*>(buffer));
uint64_t bytes_left(size);
while (bytes_left) {
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
while (!closed_ && (BytesFreeInternal() == 0)) {
AutoUnlock unlock(lock_);
VLOG(1) << "Circular buffer is full, which can happen if data arrives "
"faster than being consumed by packager. Ignore if it is not "
"live packaging. Otherwise, try increasing --io_cache_size.";
read_event_.Wait();
read_event_.Wait(&mutex_);
}
if (closed_)
return 0;
@ -103,35 +94,33 @@ uint64_t IoCache::Write(const void* buffer, uint64_t size) {
}
void IoCache::Clear() {
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
r_ptr_ = w_ptr_ = circular_buffer_.data();
// Let any writers know that there is room in the cache.
read_event_.Signal();
}
void IoCache::Close() {
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
closed_ = true;
read_event_.Signal();
write_event_.Signal();
}
void IoCache::Reopen() {
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
CHECK(closed_);
r_ptr_ = w_ptr_ = circular_buffer_.data();
closed_ = false;
read_event_.Reset();
write_event_.Reset();
}
uint64_t IoCache::BytesCached() {
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
return BytesCachedInternal();
}
uint64_t IoCache::BytesFree() {
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
return BytesFreeInternal();
}
@ -146,10 +135,9 @@ uint64_t IoCache::BytesFreeInternal() {
}
void IoCache::WaitUntilEmptyOrClosed() {
AutoLock lock(lock_);
absl::MutexLock lock(&mutex_);
while (!closed_ && BytesCachedInternal()) {
AutoUnlock unlock(lock_);
read_event_.Wait();
read_event_.Wait(&mutex_);
}
}
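The absl::CondVar wait loops above follow a standard shape: re-check the condition under the mutex, and let Wait() release the mutex while blocked. Below is a minimal, hypothetical sketch of the same pattern, including why the buffer is one byte larger than the cache; the names are illustrative, not from the ported code.

#include <cstdint>
#include <vector>

#include "absl/synchronization/mutex.h"

class TinyCache {
 public:
  explicit TinyCache(uint64_t size) : buffer_(size + 1) {}

  void Push(uint8_t byte) {
    absl::MutexLock lock(&mutex_);
    while (Full())
      not_full_.Wait(&mutex_);  // Releases mutex_ while blocked, reacquires.
    buffer_[w_] = byte;
    w_ = (w_ + 1) % buffer_.size();
    not_empty_.Signal();  // Wake a reader, as IoCache::Write() wakes readers.
  }

 private:
  // The spare slot keeps the pointer conditions unambiguous: r_ == w_ always
  // means "empty", and (w_ + 1) % buffer_.size() == r_ always means "full".
  bool Full() const { return (w_ + 1) % buffer_.size() == r_; }

  absl::Mutex mutex_;
  absl::CondVar not_empty_, not_full_;
  std::vector<uint8_t> buffer_;
  uint64_t r_ = 0, w_ = 0;
};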


@ -8,10 +8,11 @@
#define PACKAGER_FILE_IO_CACHE_H_
#include <stdint.h>
#include <vector>
#include "packager/base/macros.h"
#include "packager/base/synchronization/lock.h"
#include "packager/base/synchronization/waitable_event.h"
#include "absl/synchronization/mutex.h"
#include "packager/common.h"
namespace shaka {
@ -67,14 +68,14 @@ class IoCache {
uint64_t BytesFreeInternal();
const uint64_t cache_size_;
base::Lock lock_;
base::WaitableEvent read_event_;
base::WaitableEvent write_event_;
std::vector<uint8_t> circular_buffer_;
const uint8_t* end_ptr_;
uint8_t* r_ptr_;
uint8_t* w_ptr_;
bool closed_;
absl::Mutex mutex_;
absl::CondVar read_event_ GUARDED_BY(mutex_);
absl::CondVar write_event_ GUARDED_BY(mutex_);
std::vector<uint8_t> circular_buffer_ GUARDED_BY(mutex_);
const uint8_t* end_ptr_ GUARDED_BY(mutex_);
uint8_t* r_ptr_ GUARDED_BY(mutex_);
uint8_t* w_ptr_ GUARDED_BY(mutex_);
bool closed_ GUARDED_BY(mutex_);
DISALLOW_COPY_AND_ASSIGN(IoCache);
};


@ -5,12 +5,13 @@
// https://developers.google.com/open-source/licenses/bsd
#include "packager/file/io_cache.h"
#include <gtest/gtest.h>
#include <string.h>
#include <algorithm>
#include "packager/base/bind.h"
#include "packager/base/bind_helpers.h"
#include "packager/base/threading/simple_thread.h"
#include <chrono>
#include <thread>
namespace {
const uint64_t kBlockSize = 256;
@ -19,27 +20,11 @@ const uint64_t kCacheSize = 16 * kBlockSize;
namespace shaka {
class ClosureThread : public base::SimpleThread {
public:
ClosureThread(const std::string& name_prefix, const base::Closure& task)
: base::SimpleThread(name_prefix), task_(task) {}
~ClosureThread() {
if (HasBeenStarted() && !HasBeenJoined())
Join();
}
void Run() { task_.Run(); }
private:
const base::Closure task_;
};
class IoCacheTest : public testing::Test {
public:
void WriteToCache(const std::vector<uint8_t>& test_buffer,
uint64_t num_writes,
int sleep_between_writes,
int sleep_between_writes_ms,
bool close_when_done) {
for (uint64_t write_idx = 0; write_idx < num_writes; ++write_idx) {
uint64_t write_result =
@ -50,9 +35,9 @@ class IoCacheTest : public testing::Test {
break;
}
EXPECT_EQ(test_buffer.size(), write_result);
if (sleep_between_writes) {
base::PlatformThread::Sleep(
base::TimeDelta::FromMilliseconds(sleep_between_writes));
if (sleep_between_writes_ms) {
std::this_thread::sleep_for(
std::chrono::milliseconds(sleep_between_writes_ms));
}
}
if (close_when_done)
@ -62,7 +47,7 @@ class IoCacheTest : public testing::Test {
protected:
void SetUp() override {
for (unsigned int idx = 0; idx < kBlockSize; ++idx)
reference_block_[idx] = idx;
reference_block_[idx] = idx & 0xff;
cache_.reset(new IoCache(kCacheSize));
cache_closed_ = false;
}
@ -82,25 +67,22 @@ class IoCacheTest : public testing::Test {
void WriteToCacheThreaded(const std::vector<uint8_t>& test_buffer,
uint64_t num_writes,
int sleep_between_writes,
int sleep_between_writes_ms,
bool close_when_done) {
writer_thread_.reset(new ClosureThread(
"WriterThread",
base::Bind(&IoCacheTest::WriteToCache, base::Unretained(this),
test_buffer, num_writes, sleep_between_writes,
close_when_done)));
writer_thread_->Start();
writer_thread_.reset(new std::thread(
std::bind(&IoCacheTest::WriteToCache, this, test_buffer, num_writes,
sleep_between_writes_ms, close_when_done)));
}
void WaitForWriterThread() {
if (writer_thread_) {
writer_thread_->Join();
writer_thread_->join();
writer_thread_.reset();
}
}
std::unique_ptr<IoCache> cache_;
std::unique_ptr<ClosureThread> writer_thread_;
std::unique_ptr<std::thread> writer_thread_;
uint8_t reference_block_[kBlockSize];
bool cache_closed_;
};
@ -186,8 +168,7 @@ TEST_F(IoCacheTest, SlowRead) {
std::vector<uint8_t> read_buffer(kBlockSize);
EXPECT_EQ(kBlockSize, cache_->Read(read_buffer.data(), kBlockSize));
EXPECT_EQ(write_buffer, read_buffer);
base::PlatformThread::Sleep(
base::TimeDelta::FromMilliseconds(kReadDelayMs));
std::this_thread::sleep_for(std::chrono::milliseconds(kReadDelayMs));
}
}
@ -198,7 +179,7 @@ TEST_F(IoCacheTest, CloseByReader) {
GenerateTestBuffer(kBlockSize, &write_buffer);
WriteToCacheThreaded(write_buffer, kNumWrites, 0, false);
while (cache_->BytesCached() < kCacheSize) {
base::PlatformThread::Sleep(base::TimeDelta::FromMilliseconds(10));
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
cache_->Close();
WaitForWriterThread();
@ -264,7 +245,7 @@ TEST_F(IoCacheTest, LargeRead) {
write_buffer.end());
}
while (cache_->BytesCached() < kCacheSize) {
base::PlatformThread::Sleep(base::TimeDelta::FromMilliseconds(10));
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
std::vector<uint8_t> read_buffer(kCacheSize);
EXPECT_EQ(kCacheSize, cache_->Read(read_buffer.data(), kCacheSize));


@ -7,90 +7,18 @@
#include "packager/file/local_file.h"
#include <stdio.h>
#if defined(OS_WIN)
#include <windows.h>
#else
#include <sys/stat.h>
#endif // defined(OS_WIN)
#include "packager/base/files/file_path.h"
#include "packager/base/files/file_util.h"
#include "packager/base/logging.h"
#include <filesystem>
#include "glog/logging.h"
namespace shaka {
namespace {
// Check if the directory |path| exists. Returns false if it does not exist or
// it is not a directory. On non-Windows, |mode| will be filled with the file
// permission bits on success.
bool DirectoryExists(const base::FilePath& path, int* mode) {
#if defined(OS_WIN)
DWORD fileattr = GetFileAttributes(path.value().c_str());
if (fileattr != INVALID_FILE_ATTRIBUTES)
return (fileattr & FILE_ATTRIBUTE_DIRECTORY) != 0;
#else
struct stat info;
if (stat(path.value().c_str(), &info) != 0)
return false;
if (S_ISDIR(info.st_mode)) {
const int FILE_PERMISSION_MASK = S_IRWXU | S_IRWXG | S_IRWXO;
if (mode)
*mode = info.st_mode & FILE_PERMISSION_MASK;
return true;
}
#endif
return false;
}
// Create all the nonexistent directories in the path. Returns true on
// success or if the directory already exists.
bool CreateDirectory(const base::FilePath& full_path) {
std::vector<base::FilePath> subpaths;
// Collect a list of all parent directories.
base::FilePath last_path = full_path;
subpaths.push_back(full_path);
for (base::FilePath path = full_path.DirName();
path.value() != last_path.value(); path = path.DirName()) {
subpaths.push_back(path);
last_path = path;
}
// For non-Windows only: file permissions for the new directories.
// The permissions are inherited from the last existing directory in the
// path. If none of the directories in the path exist, they default to
// 0755.
int mode = 0755;
// Iterate through the parents and create the missing ones.
for (auto i = subpaths.rbegin(); i != subpaths.rend(); ++i) {
if (DirectoryExists(*i, &mode)) {
continue;
}
#if defined(OS_WIN)
if (::CreateDirectory(i->value().c_str(), nullptr)) {
continue;
}
#else
if (mkdir(i->value().c_str(), mode) == 0) {
continue;
}
#endif
// Mkdir failed, but it might have failed with EEXIST, or some other error
// due to the directory appearing out of thin air. This can occur if
// two processes are trying to create the same file system tree at the same
// time. Check to see if it exists and make sure it is a directory.
const auto saved_error_code = ::logging::GetLastSystemErrorCode();
if (!DirectoryExists(*i, nullptr)) {
LOG(ERROR) << "Failed to create directory " << i->value().c_str()
<< " ErrorCode " << saved_error_code;
return false;
}
}
return true;
}
} // namespace
// Always open files in binary mode.
const char kAdditionalFileMode[] = "b";
@ -104,7 +32,7 @@ LocalFile::LocalFile(const char* file_name, const char* mode)
bool LocalFile::Close() {
bool result = true;
if (internal_file_) {
result = base::CloseFile(internal_file_);
result = fclose(internal_file_) == 0;
internal_file_ = NULL;
}
delete this;
@ -144,10 +72,10 @@ int64_t LocalFile::Size() {
return -1;
}
int64_t file_size;
if (!base::GetFileSize(base::FilePath::FromUTF8Unsafe(file_name()),
&file_size)) {
LOG(ERROR) << "Cannot get file size.";
std::error_code ec;
int64_t file_size = std::filesystem::file_size(file_name(), ec);
if (ec) {
LOG(ERROR) << "Cannot get file size, error: " << ec;
return -1;
}
return file_size;
@ -182,22 +110,30 @@ bool LocalFile::Tell(uint64_t* position) {
LocalFile::~LocalFile() {}
bool LocalFile::Open() {
base::FilePath file_path(base::FilePath::FromUTF8Unsafe(file_name()));
std::filesystem::path file_path(file_name());
// Create upper level directories for write mode.
if (file_mode_.find("w") != std::string::npos) {
// The function returns true if the directories already exist.
if (!shaka::CreateDirectory(file_path.DirName())) {
// From the return value of filesystem::create_directories, you can't tell
// the difference between pre-existing directories and failure. So check
// first if it needs to be created.
auto parent_path = file_path.parent_path();
std::error_code ec;
if (!std::filesystem::is_directory(parent_path, ec)) {
if (!std::filesystem::create_directories(parent_path, ec)) {
return false;
}
}
}
internal_file_ = base::OpenFile(file_path, file_mode_.c_str());
internal_file_ = fopen(file_path.u8string().c_str(), file_mode_.c_str());
return (internal_file_ != NULL);
}
bool LocalFile::Delete(const char* file_name) {
return base::DeleteFile(base::FilePath::FromUTF8Unsafe(file_name), false);
std::error_code ec;
// On error (when ec is set), remove() returns false anyway.
return std::filesystem::remove(file_name, ec);
}
} // namespace shaka
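The is_directory-then-create pattern in Open() above can be isolated into a standalone sketch; EnsureDir is a hypothetical name used only for illustration:

#include <filesystem>
#include <system_error>

// create_directories() returns false both when the directories already exist
// and on failure, which is the ambiguity noted in the comment above. Checking
// is_directory() first removes that ambiguity.
bool EnsureDir(const std::filesystem::path& dir) {
  std::error_code ec;
  if (std::filesystem::is_directory(dir, ec))
    return true;
  return std::filesystem::create_directories(dir, ec);
}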


@ -11,7 +11,6 @@
#include <string>
#include "packager/base/compiler_specific.h"
#include "packager/file/file.h"
namespace shaka {


@ -13,8 +13,8 @@
#include <memory>
#include <set>
#include "packager/base/logging.h"
#include "packager/base/synchronization/lock.h"
#include "absl/synchronization/mutex.h"
#include "glog/logging.h"
namespace shaka {
namespace {
@ -30,7 +30,7 @@ class FileSystem {
}
void Delete(const std::string& file_name) {
base::AutoLock auto_lock(lock_);
absl::MutexLock auto_lock(&mutex_);
if (open_files_.find(file_name) != open_files_.end()) {
LOG(ERROR) << "File '" << file_name
@ -43,7 +43,7 @@ class FileSystem {
}
void DeleteAll() {
base::AutoLock auto_lock(lock_);
absl::MutexLock auto_lock(&mutex_);
if (!open_files_.empty()) {
LOG(ERROR) << "There are still files open. Deleting an open MemoryFile "
"is not allowed. Exit without deleting the file.";
@ -54,12 +54,12 @@ class FileSystem {
std::vector<uint8_t>* Open(const std::string& file_name,
const std::string& mode) {
base::AutoLock auto_lock(lock_);
absl::MutexLock auto_lock(&mutex_);
if (open_files_.find(file_name) != open_files_.end()) {
NOTIMPLEMENTED() << "File '" << file_name
<< "' is already open. MemoryFile does not support "
"open the same file before it is closed.";
"opening the same file before it is closed.";
return nullptr;
}
@ -81,7 +81,7 @@ class FileSystem {
}
bool Close(const std::string& file_name) {
base::AutoLock auto_lock(lock_);
absl::MutexLock auto_lock(&mutex_);
auto iter = open_files_.find(file_name);
if (iter == open_files_.end()) {
@ -101,11 +101,11 @@ class FileSystem {
FileSystem() = default;
// Filename to file data map.
std::map<std::string, std::vector<uint8_t>> files_;
std::map<std::string, std::vector<uint8_t>> files_ GUARDED_BY(mutex_);
// Filename to file open modes map.
std::map<std::string, std::string> open_files_;
std::map<std::string, std::string> open_files_ GUARDED_BY(mutex_);
base::Lock lock_;
absl::Mutex mutex_;
};
} // namespace


@ -3,8 +3,10 @@
// found in the LICENSE file.
#include "packager/file/memory_file.h"
#include <gtest/gtest.h>
#include <memory>
#include "packager/file/file.h"
#include "packager/file/file_closer.h"


@ -0,0 +1,108 @@
// Copyright 2022 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/file/thread_pool.h"
#include <thread>
#include "absl/time/time.h"
#include "glog/logging.h"
namespace shaka {
namespace {
const absl::Duration kMaxThreadIdleTime = absl::Minutes(10);
} // namespace
// static
ThreadPool ThreadPool::instance;
ThreadPool::ThreadPool() : num_idle_threads_(0), terminated_(false) {}
ThreadPool::~ThreadPool() {
Terminate();
}
void ThreadPool::PostTask(const std::function<void()>& task) {
absl::MutexLock lock(&mutex_);
DCHECK(!terminated_) << "Should not call PostTask after Terminate!";
if (terminated_) {
return;
}
// An empty task is used internally to signal the thread to terminate.
// Callers should never post an empty task.
if (!task) {
DLOG(ERROR) << "Should not post an empty task!";
return;
}
tasks_.push(std::move(task));
if (num_idle_threads_ >= tasks_.size()) {
// We have enough threads available.
tasks_available_.SignalAll();
} else {
// We need to start an additional thread.
std::thread thread(std::bind(&ThreadPool::ThreadMain, this));
thread.detach();
}
}
void ThreadPool::Terminate() {
{
absl::MutexLock lock(&mutex_);
terminated_ = true;
while (!tasks_.empty()) {
tasks_.pop();
}
}
tasks_available_.SignalAll();
}
ThreadPool::Task ThreadPool::WaitForTask() {
absl::MutexLock lock(&mutex_);
if (terminated_) {
// The pool is terminated. Terminate this thread.
return Task();
}
if (tasks_.empty()) {
num_idle_threads_++;
// Wait for a task, up to the maximum idle time.
tasks_available_.WaitWithTimeout(&mutex_, kMaxThreadIdleTime);
num_idle_threads_--;
if (tasks_.empty()) {
// No work before the timeout. Terminate this thread.
return Task();
}
}
// Get the next task from the queue.
Task task = tasks_.front();
tasks_.pop();
return task;
}
void ThreadPool::ThreadMain() {
while (true) {
auto task = WaitForTask();
if (!task) {
// An empty task signals the thread to terminate.
return;
}
// Run the task, then loop to wait for another.
task();
}
}
} // namespace shaka


@ -0,0 +1,56 @@
// Copyright 2022 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_FILE_THREAD_POOL_H_
#define PACKAGER_FILE_THREAD_POOL_H_
#include "packager/common.h"
#include <functional>
#include <queue>
#include "absl/base/thread_annotations.h"
#include "absl/synchronization/mutex.h"
namespace shaka {
/// A simple thread pool. We used to get this from Chromium base::, but there
/// is no replacement in the C++ standard library or in absl (as of June
/// 2022). The pool will grow when there are no threads available to handle a
/// task, and it will shrink when a thread has been idle for too long.
class ThreadPool {
public:
typedef std::function<void()> Task;
ThreadPool();
~ThreadPool();
/// Find or spawn a worker thread to handle |task|.
/// @param task A potentially long-running task to be handled by the pool.
void PostTask(const Task& task);
static ThreadPool instance;
private:
/// Stop handing out tasks to workers, wake up all threads, and make them
/// exit.
void Terminate();
Task WaitForTask();
void ThreadMain();
absl::Mutex mutex_;
absl::CondVar tasks_available_ GUARDED_BY(mutex_);
std::queue<Task> tasks_ GUARDED_BY(mutex_);
size_t num_idle_threads_ GUARDED_BY(mutex_);
bool terminated_ GUARDED_BY(mutex_);
DISALLOW_COPY_AND_ASSIGN(ThreadPool);
};
} // namespace shaka
#endif // PACKAGER_FILE_THREAD_POOL_H_
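A brief usage sketch for this pool; the function and lambda body are illustrative, while ThreadPool::instance and PostTask() come from the header above:

#include "glog/logging.h"
#include "packager/file/thread_pool.h"

void PostExampleTask() {
  // Fire-and-forget: the pool reuses an idle thread or spawns a new one.
  shaka::ThreadPool::instance.PostTask([]() {
    LOG(INFO) << "Running on a pool thread.";
  });
}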


@ -6,10 +6,7 @@
#include "packager/file/threaded_io_file.h"
#include "packager/base/bind.h"
#include "packager/base/bind_helpers.h"
#include "packager/base/location.h"
#include "packager/base/threading/worker_pool.h"
#include "packager/file/thread_pool.h"
namespace shaka {
@ -25,12 +22,10 @@ ThreadedIoFile::ThreadedIoFile(std::unique_ptr<File, FileCloser> internal_file,
position_(0),
size_(0),
eof_(false),
flushing_(false),
flush_complete_event_(base::WaitableEvent::ResetPolicy::AUTOMATIC,
base::WaitableEvent::InitialState::NOT_SIGNALED),
internal_file_error_(0),
task_exit_event_(base::WaitableEvent::ResetPolicy::AUTOMATIC,
base::WaitableEvent::InitialState::NOT_SIGNALED) {
flushing_(false),
flush_complete_(false),
task_exited_(false) {
DCHECK(internal_file_);
}
@ -45,10 +40,7 @@ bool ThreadedIoFile::Open() {
position_ = 0;
size_ = internal_file_->Size();
base::WorkerPool::PostTask(
FROM_HERE,
base::Bind(&ThreadedIoFile::TaskHandler, base::Unretained(this)),
true /* task_is_slow */);
ThreadPool::instance.PostTask(std::bind(&ThreadedIoFile::TaskHandler, this));
return true;
}
@ -60,7 +52,7 @@ bool ThreadedIoFile::Close() {
result = Flush();
cache_.Close();
task_exit_event_.Wait();
WaitForSignal(&task_exited_mutex_, &task_exited_);
result &= internal_file_.release()->Close();
delete this;
@ -111,9 +103,15 @@ bool ThreadedIoFile::Flush() {
if (internal_file_error_.load(std::memory_order_relaxed))
return false;
{
absl::MutexLock lock(&flush_mutex_);
flushing_ = true;
flush_complete_ = false;
}
cache_.Close();
flush_complete_event_.Wait();
WaitForSignal(&flush_mutex_, &flush_complete_);
return internal_file_->Flush();
}
@ -128,7 +126,8 @@ bool ThreadedIoFile::Seek(uint64_t position) {
// Reading. Close cache, wait for thread task to exit, seek, and re-post
// the task.
cache_.Close();
task_exit_event_.Wait();
WaitForSignal(&task_exited_mutex_, &task_exited_);
bool result = internal_file_->Seek(position);
if (!result) {
// Seek failed. Seek to logical position instead.
@ -138,10 +137,9 @@ bool ThreadedIoFile::Seek(uint64_t position) {
}
cache_.Reopen();
eof_ = false;
base::WorkerPool::PostTask(
FROM_HERE,
base::Bind(&ThreadedIoFile::TaskHandler, base::Unretained(this)),
true /* task_is_slow */);
ThreadPool::instance.PostTask(
std::bind(&ThreadedIoFile::TaskHandler, this));
if (!result)
return false;
}
@ -157,11 +155,20 @@ bool ThreadedIoFile::Tell(uint64_t* position) {
}
void ThreadedIoFile::TaskHandler() {
{
absl::MutexLock lock(&task_exited_mutex_);
task_exited_ = false;
}
if (mode_ == kInputMode)
RunInInputMode();
else
RunInOutputMode();
task_exit_event_.Signal();
{
absl::MutexLock lock(&task_exited_mutex_);
task_exited_ = true;
}
}
void ThreadedIoFile::RunInInputMode() {
@ -190,10 +197,11 @@ void ThreadedIoFile::RunInOutputMode() {
while (true) {
uint64_t write_bytes = cache_.Read(&io_buffer_[0], io_buffer_.size());
if (write_bytes == 0) {
absl::MutexLock lock(&flush_mutex_);
if (flushing_) {
cache_.Reopen();
flushing_ = false;
flush_complete_event_.Signal();
flush_complete_ = true;
} else {
return;
}
@ -205,9 +213,11 @@ void ThreadedIoFile::RunInOutputMode() {
if (write_result < 0) {
internal_file_error_.store(write_result, std::memory_order_relaxed);
cache_.Close();
absl::MutexLock lock(&flush_mutex_);
if (flushing_) {
flushing_ = false;
flush_complete_event_.Signal();
flush_complete_ = true;
}
return;
}
@ -217,4 +227,15 @@ void ThreadedIoFile::RunInOutputMode() {
}
}
void ThreadedIoFile::WaitForSignal(absl::Mutex* mutex, bool* condition) {
// This waits until the boolean condition is true, then locks the mutex.
// Absl re-checks the condition every time the mutex is unlocked. As long as
// the mutex is held whenever the flag is modified, this wait will always
// wake up when the flag is changed to true.
mutex->LockWhen(absl::Condition(condition));
// LockWhen leaves the mutex locked. Return after unlocking the mutex again.
mutex->Unlock();
}
} // namespace shaka
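For clarity, here is a minimal, hypothetical waiter/signaler pair using the same absl::Condition mechanism as WaitForSignal() above (not part of the ported file):

#include "absl/synchronization/mutex.h"

absl::Mutex m;
bool done = false;  // Only modified while holding |m|.

void Signaler() {
  absl::MutexLock lock(&m);
  done = true;  // The waiter's LockWhen() wakes up on this change.
}

void Waiter() {
  m.LockWhen(absl::Condition(&done));  // Blocks until done == true.
  m.Unlock();  // LockWhen() leaves the mutex held.
}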


@ -9,7 +9,8 @@
#include <atomic>
#include <memory>
#include "packager/base/synchronization/waitable_event.h"
#include "absl/synchronization/mutex.h"
#include "packager/file/file.h"
#include "packager/file/file_closer.h"
#include "packager/file/io_cache.h"
@ -48,6 +49,7 @@ class ThreadedIoFile : public File {
void TaskHandler();
void RunInInputMode();
void RunInOutputMode();
void WaitForSignal(absl::Mutex* mutex, bool* condition);
std::unique_ptr<File, FileCloser> internal_file_;
const Mode mode_;
@ -56,11 +58,14 @@ class ThreadedIoFile : public File {
uint64_t position_;
uint64_t size_;
std::atomic<bool> eof_;
bool flushing_;
base::WaitableEvent flush_complete_event_;
std::atomic<int32_t> internal_file_error_;
// Signalled when thread task exits.
base::WaitableEvent task_exit_event_;
std::atomic<int64_t> internal_file_error_;
absl::Mutex flush_mutex_;
bool flushing_ GUARDED_BY(flush_mutex_);
bool flush_complete_ GUARDED_BY(flush_mutex_);
absl::Mutex task_exited_mutex_;
bool task_exited_ GUARDED_BY(task_exited_mutex_);
DISALLOW_COPY_AND_ASSIGN(ThreadedIoFile);
};


@ -7,34 +7,28 @@
#include "packager/file/udp_file.h"
#if defined(OS_WIN)
#include <windows.h>
#include <ws2tcpip.h>
#define close closesocket
#define EINTR_CODE WSAEINTR
#else
#include <arpa/inet.h>
#include <errno.h>
#include <strings.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#define INVALID_SOCKET -1
#define EINTR_CODE EINTR
// IP_MULTICAST_ALL has been supported since kernel version 2.6.31, but we
// may be building on a machine that is older than that.
#ifndef IP_MULTICAST_ALL
#define IP_MULTICAST_ALL 49
#endif
#endif // defined(OS_WIN)
#include <limits>
#include "packager/base/logging.h"
#include "glog/logging.h"
#include "packager/file/udp_options.h"
namespace shaka {
@ -83,15 +77,17 @@ int64_t UdpFile::Read(void* buffer, uint64_t length) {
int64_t result;
do {
result =
recvfrom(socket_, reinterpret_cast<char*>(buffer), length, 0, NULL, 0);
result = recvfrom(socket_, reinterpret_cast<char*>(buffer),
static_cast<int>(length), 0, NULL, 0);
} while (result == -1 && GetSocketErrorCode() == EINTR_CODE);
return result;
}
int64_t UdpFile::Write(const void* buffer, uint64_t length) {
NOTIMPLEMENTED();
UNUSED(buffer);
UNUSED(length);
NOTIMPLEMENTED() << "UdpFile is unwritable!";
return -1;
}
@ -103,17 +99,19 @@ int64_t UdpFile::Size() {
}
bool UdpFile::Flush() {
NOTIMPLEMENTED();
NOTIMPLEMENTED() << "UdpFile is unflushable!";
return false;
}
bool UdpFile::Seek(uint64_t position) {
NOTIMPLEMENTED();
UNUSED(position);
NOTIMPLEMENTED() << "UdpFile is unseekable!";
return false;
}
bool UdpFile::Tell(uint64_t* position) {
NOTIMPLEMENTED();
UNUSED(position);
NOTIMPLEMENTED() << "UdpFile is unseekable!";
return false;
}
@ -170,10 +168,12 @@ bool UdpFile::Open() {
return false;
}
struct sockaddr_in local_sock_addr = {0};
// TODO(kqyang): Support IPv6.
struct sockaddr_in local_sock_addr;
memset(&local_sock_addr, 0, sizeof(local_sock_addr));
local_sock_addr.sin_family = AF_INET;
local_sock_addr.sin_port = htons(options->port());
const bool is_multicast = IsIpv4MulticastAddress(local_in_addr);
if (is_multicast) {
local_sock_addr.sin_addr.s_addr = htonl(INADDR_ANY);


@ -11,10 +11,10 @@
#include <string>
#include "packager/base/compiler_specific.h"
#include "packager/file/file.h"
#if defined(OS_WIN)
#include <windows.h>
#include <winsock2.h>
#else
typedef int SOCKET;


@ -6,12 +6,15 @@
#include "packager/file/udp_options.h"
#include <gflags/gflags.h>
#include "absl/flags/flag.h"
#include "absl/strings/numbers.h"
#include "absl/strings/str_split.h"
#include "glog/logging.h"
#include "packager/common.h"
#include "packager/kv_pairs/kv_pairs.h"
#include "packager/base/strings/string_number_conversions.h"
#include "packager/base/strings/string_split.h"
DEFINE_string(udp_interface_address,
ABSL_FLAG(std::string,
udp_interface_address,
"",
"IP address of the interface over which to receive UDP unicast"
" or multicast streams");
@ -50,47 +53,46 @@ FieldType GetFieldType(const std::string& field_name) {
return kUnknownField;
}
bool StringToAddressAndPort(base::StringPiece addr_and_port,
bool StringToAddressAndPort(std::string_view addr_and_port,
std::string* addr,
uint16_t* port) {
DCHECK(addr);
DCHECK(port);
const size_t colon_pos = addr_and_port.find(':');
if (colon_pos == base::StringPiece::npos) {
if (colon_pos == std::string_view::npos) {
return false;
}
*addr = addr_and_port.substr(0, colon_pos).as_string();
unsigned port_value;
if (!base::StringToUint(addr_and_port.substr(colon_pos + 1), &port_value) ||
*addr = addr_and_port.substr(0, colon_pos);
// NOTE: SimpleAtoi will not take a uint16_t. So we check the bounds of the
// value and then cast to uint16_t.
uint32_t port_value;
if (!absl::SimpleAtoi(addr_and_port.substr(colon_pos + 1), &port_value) ||
(port_value > 65535)) {
return false;
}
*port = port_value;
*port = static_cast<uint16_t>(port_value);
return true;
}
} // namespace
std::unique_ptr<UdpOptions> UdpOptions::ParseFromString(
base::StringPiece udp_url) {
std::string_view udp_url) {
std::unique_ptr<UdpOptions> options(new UdpOptions);
const size_t question_mark_pos = udp_url.find('?');
base::StringPiece address_str = udp_url.substr(0, question_mark_pos);
std::string_view address_str = udp_url.substr(0, question_mark_pos);
if (question_mark_pos != base::StringPiece::npos) {
base::StringPiece options_str = udp_url.substr(question_mark_pos + 1);
if (question_mark_pos != std::string_view::npos) {
std::string_view options_str = udp_url.substr(question_mark_pos + 1);
std::vector<KVPair> kv_pairs = SplitStringIntoKeyValuePairs(options_str);
base::StringPairs pairs;
if (!base::SplitStringIntoKeyValuePairs(options_str, '=', '&', &pairs)) {
LOG(ERROR) << "Invalid udp options name/value pairs " << options_str;
return nullptr;
}
for (const auto& pair : pairs) {
for (const auto& pair : kv_pairs) {
switch (GetFieldType(pair.first)) {
case kBufferSizeField:
if (!base::StringToInt(pair.second, &options->buffer_size_)) {
if (!absl::SimpleAtoi(pair.second, &options->buffer_size_)) {
LOG(ERROR) << "Invalid udp option for buffer_size field "
<< pair.second;
return nullptr;
@ -105,7 +107,7 @@ std::unique_ptr<UdpOptions> UdpOptions::ParseFromString(
break;
case kReuseField: {
int reuse_value = 0;
if (!base::StringToInt(pair.second, &reuse_value)) {
if (!absl::SimpleAtoi(pair.second, &reuse_value)) {
LOG(ERROR) << "Invalid udp option for reuse field " << pair.second;
return nullptr;
}
@ -113,7 +115,7 @@ std::unique_ptr<UdpOptions> UdpOptions::ParseFromString(
break;
}
case kTimeoutField:
if (!base::StringToUint(pair.second, &options->timeout_us_)) {
if (!absl::SimpleAtoi(pair.second, &options->timeout_us_)) {
LOG(ERROR) << "Invalid udp option for timeout field "
<< pair.second;
return nullptr;
@ -127,11 +129,11 @@ std::unique_ptr<UdpOptions> UdpOptions::ParseFromString(
}
}
if (!FLAGS_udp_interface_address.empty()) {
if (!absl::GetFlag(FLAGS_udp_interface_address).empty()) {
LOG(WARNING) << "--udp_interface_address is deprecated. Consider switching "
"to udp options instead, something like "
"udp:://ip:port?interface=interface_ip.";
options->interface_address_ = FLAGS_udp_interface_address;
options->interface_address_ = absl::GetFlag(FLAGS_udp_interface_address);
}
if (!StringToAddressAndPort(address_str, &options->address_,


@ -7,8 +7,6 @@
#include <memory>
#include <string>
#include "packager/base/strings/string_piece.h"
namespace shaka {
/// Options parsed from UDP url string of the form: udp://ip:port[?options]
@ -19,7 +17,7 @@ class UdpOptions {
/// Parse from UDP url.
/// @param udp_url is the url of the form udp://ip:port[?options]
/// @returns a UdpOptions object on success, nullptr otherwise.
static std::unique_ptr<UdpOptions> ParseFromString(base::StringPiece udp_url);
static std::unique_ptr<UdpOptions> ParseFromString(std::string_view udp_url);
const std::string& address() const { return address_; }
uint16_t port() const { return port_; }
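A hedged usage sketch; the URL and option values are illustrative, while ParseFromString(), address(), and port() are from this header:

#include "glog/logging.h"
#include "packager/file/udp_options.h"

void ParseExample() {
  auto options = shaka::UdpOptions::ParseFromString(
      "224.1.2.30:88?reuse=1&timeout=5000000");
  if (!options) {
    LOG(ERROR) << "Could not parse UDP url.";
    return;
  }
  LOG(INFO) << "Receiving on " << options->address() << ":" << options->port();
}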


@ -6,16 +6,24 @@
#include "packager/file/udp_options.h"
#include <gflags/gflags.h>
#include <gtest/gtest.h>
DECLARE_string(udp_interface_address);
#include "absl/flags/declare.h"
#include "absl/flags/flag.h"
#include "packager/flag_saver.h"
ABSL_DECLARE_FLAG(std::string, udp_interface_address);
namespace shaka {
class UdpOptionsTest : public testing::Test {
public:
void SetUp() override { FLAGS_udp_interface_address = ""; }
UdpOptionsTest() : saver(&FLAGS_udp_interface_address) {}
void SetUp() override { absl::SetFlag(&FLAGS_udp_interface_address, ""); }
private:
FlagSaver<std::string> saver;
};
TEST_F(UdpOptionsTest, AddressAndPort) {
@ -47,7 +55,7 @@ TEST_F(UdpOptionsTest, MissingAddress) {
}
TEST_F(UdpOptionsTest, UdpInterfaceAddressFlag) {
FLAGS_udp_interface_address = "10.11.12.13";
absl::SetFlag(&FLAGS_udp_interface_address, "10.11.12.13");
auto options = UdpOptions::ParseFromString("224.1.2.30:88");
ASSERT_TRUE(options);

packager/flag_saver.h (new file, 33 lines)

@ -0,0 +1,33 @@
// Copyright 2022 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_FLAG_SAVER_H_
#define PACKAGER_FLAG_SAVER_H_
#include "absl/flags/flag.h"
namespace shaka {
/// A replacement for gflags' FlagSaver, which is used in testing.
/// A FlagSaver is an RAII object to save and restore the values of
/// command-line flags during a test. Unlike the gflags version, flags to be
/// saved and restored must be listed explicitly.
template <typename T>
class FlagSaver {
public:
FlagSaver(absl::Flag<T>* flag)
: flag_(flag), original_value_(absl::GetFlag(*flag)) {}
~FlagSaver() { absl::SetFlag(flag_, original_value_); }
private:
absl::Flag<T>* flag_; // unowned
T original_value_;
};
} // namespace shaka
#endif // PACKAGER_FLAG_SAVER_H_
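A short usage sketch mirroring UdpOptionsTest above; the flag comes from udp_options.cc, and the test name and body are illustrative:

#include <string>

#include <gtest/gtest.h>

#include "absl/flags/declare.h"
#include "absl/flags/flag.h"
#include "packager/flag_saver.h"

ABSL_DECLARE_FLAG(std::string, udp_interface_address);

TEST(FlagSaverExample, RestoresFlagAfterTest) {
  shaka::FlagSaver<std::string> saver(&FLAGS_udp_interface_address);
  absl::SetFlag(&FLAGS_udp_interface_address, "10.0.0.1");
  // ... exercise code that reads the flag ...
  // |saver| restores the original value when it goes out of scope.
}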


@ -0,0 +1,20 @@
# Copyright 2022 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
add_library(kv_pairs STATIC
kv_pairs.cc)
target_link_libraries(kv_pairs
absl::str_format
glog)
add_executable(kv_pairs_unittest
kv_pairs_unittest.cc)
target_link_libraries(kv_pairs_unittest
kv_pairs
gmock
gtest
gtest_main)
add_test(NAME kv_pairs_unittest COMMAND kv_pairs_unittest)


@ -0,0 +1,30 @@
// Copyright 2022 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/kv_pairs/kv_pairs.h"
#include "absl/strings/str_split.h"
namespace shaka {
std::vector<KVPair> SplitStringIntoKeyValuePairs(std::string_view str) {
std::vector<KVPair> kv_pairs;
// Edge case: 0 pairs.
if (str.size() == 0) {
return kv_pairs;
}
std::vector<std::string> kv_strings = absl::StrSplit(str, '&');
for (const auto& kv_string : kv_strings) {
KVPair pair = absl::StrSplit(kv_string, absl::MaxSplits('=', 1));
kv_pairs.push_back(pair);
}
return kv_pairs;
}
} // namespace shaka


@ -0,0 +1,17 @@
// Copyright 2022 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include <string>
#include <string_view>
#include <utility>
#include <vector>
namespace shaka {
typedef std::pair<std::string, std::string> KVPair;
std::vector<KVPair> SplitStringIntoKeyValuePairs(std::string_view str);
} // namespace shaka


@ -0,0 +1,38 @@
// Copyright 2014 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include "packager/kv_pairs/kv_pairs.h"
namespace shaka {
using ::std::make_pair;
using ::testing::ElementsAre;
TEST(KVPairs, Empty) {
ASSERT_THAT(SplitStringIntoKeyValuePairs(""), ElementsAre());
}
TEST(KVPairs, Single) {
ASSERT_THAT(SplitStringIntoKeyValuePairs("a=b"),
ElementsAre(make_pair("a", "b")));
}
TEST(KVPairs, Multiple) {
ASSERT_THAT(SplitStringIntoKeyValuePairs("a=b&c=d&e=f"),
ElementsAre(make_pair("a", "b"), make_pair("c", "d"),
make_pair("e", "f")));
}
TEST(KVPairs, ExtraEqualsSigns) {
ASSERT_THAT(SplitStringIntoKeyValuePairs("a=b&c==d&e=f=g=h"),
ElementsAre(make_pair("a", "b"), make_pair("c", "=d"),
make_pair("e", "f=g=h")));
}
} // namespace shaka


@ -0,0 +1,24 @@
# Copyright 2022 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
add_library(status STATIC
status.cc)
target_link_libraries(status
absl::str_format
glog)
if(LIBPACKAGER_SHARED)
target_compile_definitions(status PUBLIC SHAKA_IMPLEMENTATION)
endif()
add_executable(status_unittest
status_unittest.cc)
target_link_libraries(status_unittest
status
gmock
gtest
gtest_main)
add_test(NAME status_unittest COMMAND status_unittest)


@ -4,10 +4,11 @@
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#include "packager/status.h"
#include "packager/status/status.h"
#include "packager/base/logging.h"
#include "packager/base/strings/stringprintf.h"
#include "absl/strings/str_format.h"
#include "glog/logging.h"
#include "packager/common.h"
namespace shaka {
@ -84,7 +85,7 @@ std::string Status::ToString() const {
if (error_code_ == error::OK)
return "OK";
return base::StringPrintf("%d (%s): %s", error_code_,
return absl::StrFormat("%d (%s): %s", error_code_,
error::ErrorCodeToString(error_code_),
error_message_.c_str());
}


@ -7,8 +7,8 @@
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include "packager/base/strings/string_number_conversions.h"
#include "packager/status.h"
#include "absl/strings/str_format.h"
#include "packager/status/status.h"
namespace shaka {
@ -24,7 +24,7 @@ static void CheckStatus(const Status& s,
} else {
EXPECT_TRUE(!s.ok());
EXPECT_THAT(s.ToString(), testing::HasSubstr(message));
EXPECT_THAT(s.ToString(), testing::HasSubstr(base::UintToString(code)));
EXPECT_THAT(s.ToString(), testing::HasSubstr(absl::StrFormat("%d", code)));
}
}


@ -1,15 +0,0 @@
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
extern "C" void __gcov_flush();
namespace coverage_util {
void FlushCoverageDataIfNecessary() {
#if defined(ENABLE_TEST_CODE_COVERAGE)
__gcov_flush();
#endif
}
} // namespace coverage_util


@ -1,17 +0,0 @@
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef TESTING_COVERAGE_UTIL_IOS_H_
#define TESTING_COVERAGE_UTIL_IOS_H_
namespace coverage_util {
// Flushes .gcda coverage files if ENABLE_TEST_CODE_COVERAGE is defined. iOS 7
// does not call any code at the "end" of an app so flushing should be
// performed manually.
void FlushCoverageDataIfNecessary();
} // namespace coverage_util
#endif // TESTING_COVERAGE_UTIL_IOS_H_


@ -1,29 +1,10 @@
FROM alpine:3.11
FROM alpine:3.12
# Install utilities, libraries, and dev tools.
RUN apk add --no-cache \
bash curl \
bsd-compat-headers c-ares-dev linux-headers \
build-base git ninja python2 python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
WORKDIR /
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH $PATH:/depot_tools
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
# Alpine uses musl, which does not have mallinfo defined in malloc.h. Define
# the structure to work around a Chromium base bug.
RUN sed -i \
'/malloc_usable_size/a \\nstruct mallinfo {\n int arena;\n int hblkhd;\n int uordblks;\n};' \
/usr/include/malloc.h
ENV GYP_DEFINES='musl=1'
build-base cmake git python3
# Build and run this docker by mapping shaka-packager with
# -v "shaka-packager:/shaka-packager".


@ -4,19 +4,7 @@ FROM archlinux:latest
RUN pacman -Sy --needed --noconfirm \
core/which \
c-ares \
gcc git python2 python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
WORKDIR /
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH /depot_tools:$PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
cmake gcc git make python3
# Build and run this docker by mapping shaka-packager with
# -v "shaka-packager:/shaka-packager".


@ -10,19 +10,11 @@ RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|
RUN yum install -y \
which \
c-ares-devel libatomic \
gcc-c++ git python2 python3
cmake gcc-toolset-9-gcc gcc-toolset-9-gcc-c++ git python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
WORKDIR /
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH /depot_tools:$PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
# CentOS 8 is old, and the g++ version installed doesn't automatically link the
# C++ filesystem library as it should. Activate a newer dev environment.
ENV PATH="/opt/rh/gcc-toolset-9/root/usr/bin:$PATH"
# Build and run this docker by mapping shaka-packager with
# -v "shaka-packager:/shaka-packager".


@ -1,22 +1,11 @@
FROM debian:9
FROM debian:11
# Install utilities, libraries, and dev tools.
RUN apt-get update && apt-get install -y apt-utils
RUN apt-get install -y \
curl \
libc-ares-dev \
build-essential git python python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH /depot_tools:$PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
build-essential cmake git python3
# Build and run this docker by mapping shaka-packager with
# -v "shaka-packager:/shaka-packager".


@ -4,19 +4,7 @@ FROM fedora:34
RUN yum install -y \
which \
c-ares-devel libatomic \
gcc-c++ git python2
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
WORKDIR /
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH /depot_tools:$PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
cmake gcc-c++ git python3
# Build and run this docker by mapping shaka-packager with
# -v "shaka-packager:/shaka-packager".


@ -1,22 +1,12 @@
FROM opensuse/leap:15
# Older versions of openSUSE (like Leap 15) have compilers too old for C++17.
# Tumbleweed is a rolling release system, but it's our only option for now.
FROM opensuse/tumbleweed:latest
# Install utilities, libraries, and dev tools.
RUN zypper in -y \
curl which \
c-ares-devel \
gcc-c++ git python python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
WORKDIR /
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH /depot_tools:$PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
cmake gcc-c++ git python3
# Build and run this docker by mapping shaka-packager with
# -v "shaka-packager:/shaka-packager".


@ -1,22 +1,14 @@
FROM ubuntu:18.04
FROM ubuntu:20.04
# Tell apt not to prompt us for anything.
ENV DEBIAN_FRONTEND noninteractive
# Install utilities, libraries, and dev tools.
RUN apt-get update && apt-get install -y apt-utils
RUN apt-get install -y \
curl \
libc-ares-dev \
build-essential git python python3
# Default to python2 because our build system is ancient.
RUN ln -sf python2 /usr/bin/python
# Install depot_tools.
RUN git clone -b chrome/4147 https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN touch depot_tools/.disable_auto_update
ENV PATH /depot_tools:$PATH
# Bypass VPYTHON included by depot_tools. Prefer the system installation.
ENV VPYTHON_BYPASS="manually managed python not supported by chrome operations"
build-essential cmake git python3
# Build and run this docker by mapping shaka-packager with
# -v "shaka-packager:/shaka-packager".


@ -1,66 +0,0 @@
#!/bin/bash
# Exit on first error.
set -e
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PACKAGER_DIR="$(dirname "$(dirname "$(dirname "$(dirname ${SCRIPT_DIR})")")")"
function docker_run() {
docker run -v ${PACKAGER_DIR}:/shaka-packager -w /shaka-packager/src ${CONTAINER} "$@"
}
# Command line arguments will be taken as an allowlist of OSes to run.
# By default, a regex that matches everything.
FILTER=".*"
if [[ $# != 0 ]]; then
# Join arguments with a pipe, to make a regex alternation to match any of
# them. The syntax is a mess, but that's bash. Set IFS (the separator
# variable) in a subshell and print the array. This has the effect of joining
# them by the character in IFS. Then add parentheses to make a complete regex
# to match all the arguments.
FILTER=$(IFS="|"; echo "$*")
FILTER="($FILTER)"
fi
# On exit, print the name of the OS we were on. This helps identify what to
# debug when the start of a test run scrolls off-screen.
trap 'echo "Failed on $OS_NAME!"' exit
echo "Using OS filter: $FILTER"
RAN_SOMETHING=0
for DOCKER_FILE in ${SCRIPT_DIR}/*_Dockerfile ; do
# Take the basename of the dockerfile path, then remove the trailing
# "_Dockerfile" from the file name. This is the OS name.
OS_NAME="$( basename "$DOCKER_FILE" | sed -e 's/_Dockerfile//' )"
if echo "$OS_NAME" | grep -Eqi "$FILTER"; then
echo "Testing $OS_NAME."
# Fall through.
else
echo "Skipping $OS_NAME."
continue
fi
# Build a unique container name per OS for debugging purposes and to improve
# caching. Container names must be in lowercase.
# To debug a failure in Alpine, for example, use:
# docker run -it -v /path/to/packager:/shaka-packager \
# packager_test_alpine:latest /bin/bash
CONTAINER="$( echo "packager_test_${OS_NAME}" | tr A-Z a-z )"
RAN_SOMETHING=1
docker build -t ${CONTAINER} -f ${DOCKER_FILE} ${SCRIPT_DIR}
docker_run rm -rf out/Release
docker_run gclient runhooks
docker_run ninja -C out/Release
docker_run out/Release/packager_test.py -v
done
# Clear the exit trap from above.
trap - exit
if [[ "$RAN_SOMETHING" == "0" ]]; then
echo "No tests were run! The filter $FILTER did not match any OSes." 1>&2
exit 1
fi


@ -1,65 +0,0 @@
# Copyright (c) 2009 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
{
'targets': [
{
'target_name': 'gmock',
'type': 'static_library',
'dependencies': [
'gtest.gyp:gtest',
],
'sources': [
# Sources based on files in r173 of gmock.
'gmock/include/gmock/gmock-actions.h',
'gmock/include/gmock/gmock-cardinalities.h',
'gmock/include/gmock/gmock-generated-actions.h',
'gmock/include/gmock/gmock-generated-function-mockers.h',
'gmock/include/gmock/gmock-generated-matchers.h',
'gmock/include/gmock/gmock-generated-nice-strict.h',
'gmock/include/gmock/gmock-matchers.h',
'gmock/include/gmock/gmock-spec-builders.h',
'gmock/include/gmock/gmock.h',
'gmock/include/gmock/internal/gmock-generated-internal-utils.h',
'gmock/include/gmock/internal/gmock-internal-utils.h',
'gmock/include/gmock/internal/gmock-port.h',
'gmock/src/gmock-all.cc',
'gmock/src/gmock-cardinalities.cc',
'gmock/src/gmock-internal-utils.cc',
'gmock/src/gmock-matchers.cc',
'gmock/src/gmock-spec-builders.cc',
'gmock/src/gmock.cc',
"gmock_custom/gmock/internal/custom/gmock-port.h",
'gmock_mutant.h', # gMock helpers
],
'sources!': [
'gmock/src/gmock-all.cc', # Not needed by our build.
],
'include_dirs': [
'gmock',
'gmock_custom',
'gmock/include',
],
'direct_dependent_settings': {
'include_dirs': [
'gmock_custom',
'gmock/include', # Allow #include <gmock/gmock.h>
],
},
'export_dependent_settings': [
'gtest.gyp:gtest',
],
},
{
'target_name': 'gmock_main',
'type': 'static_library',
'dependencies': [
'gmock',
],
'sources': [
'gmock/src/gmock_main.cc',
],
},
],
}


@ -1,28 +0,0 @@
// Copyright 2017 Google Inc. All rights reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file or at
// https://developers.google.com/open-source/licenses/bsd
#ifndef PACKAGER_TESTING_GMOCK_CUSTOM_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
#define PACKAGER_TESTING_GMOCK_CUSTOM_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
#include <type_traits>
namespace std {
// Provide an alternative implementation of std::is_default_constructible for
// old, pre-4.7 versions of libstdc++, where is_default_constructible is
// missing. <20120322 below implies pre-4.7.0. In addition, we blacklist
// several versions released after 4.7.0 from the pre-4.7.0 branch: 20120702
// implies 4.5.4, and 20121127 implies 4.6.4.
#if defined(__GLIBCXX__) && \
(__GLIBCXX__ < 20120322 || __GLIBCXX__ == 20120702 || \
__GLIBCXX__ == 20121127)
template <typename T>
using is_default_constructible = std::is_constructible<T>;
#endif
} // namespace std
#endif // PACKAGER_TESTING_GMOCK_CUSTOM_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_

File diff suppressed because it is too large.


@ -1,218 +0,0 @@
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
{
'includes': [
'gtest.gypi',
],
'targets': [
{
'target_name': 'gtest',
'toolsets': ['host', 'target'],
'type': 'static_library',
'sources': [
'<@(gtest_sources)',
],
'include_dirs': [
'gtest',
'gtest/include',
],
'dependencies': [
'gtest_prod',
],
'defines': [
# In order to allow regex matches in gtest to be shared between Windows
# and other systems, we tell gtest to always use its internal engine.
'GTEST_HAS_POSIX_RE=0',
'GTEST_LANG_CXX11=1',
],
'all_dependent_settings': {
'defines': [
'GTEST_HAS_POSIX_RE=0',
'GTEST_LANG_CXX11=1',
],
},
'conditions': [
['OS == "mac" or OS == "ios"', {
'sources': [
'gtest_mac.h',
'gtest_mac.mm',
],
'link_settings': {
'libraries': [
'$(SDKROOT)/System/Library/Frameworks/Foundation.framework',
],
},
}],
['OS == "mac"', {
'sources': [
'platform_test_mac.mm',
],
}],
['OS == "ios"', {
'dependencies' : [
'<(DEPTH)/testing/iossim/iossim.gyp:iossim#host',
],
'direct_dependent_settings': {
'target_conditions': [
# Turn all tests into bundles on iOS because that's the only
# type of executable supported for iOS.
['_type=="executable"', {
'variables': {
# Use a variable so the path gets fixed up so it is always
# correct when INFOPLIST_FILE finally gets set.
'ios_unittest_info_plist_path':
'<(DEPTH)/testing/gtest_ios/unittest-Info.plist',
},
'mac_bundle': 1,
'xcode_settings': {
'BUNDLE_ID_TEST_NAME':
'>!(echo ">(_target_name)" | sed -e "s/_//g")',
'INFOPLIST_FILE': '>(ios_unittest_info_plist_path)',
},
'mac_bundle_resources': [
'<(ios_unittest_info_plist_path)',
'<(DEPTH)/testing/gtest_ios/Default-568h@2x.png',
],
'mac_bundle_resources!': [
'<(ios_unittest_info_plist_path)',
],
}],
],
},
'sources': [
'coverage_util_ios.cc',
'coverage_util_ios.h',
'platform_test_ios.mm',
],
}],
['OS=="ios" and asan==1', {
'direct_dependent_settings': {
'target_conditions': [
# Package the ASan runtime dylib into the test app bundles.
['_type=="executable"', {
'postbuilds': [
{
'variables': {
# Define copy_asan_dylib_path in a variable ending in
# _path so that gyp understands it's a path and
# performs proper relativization during dict merging.
'copy_asan_dylib_path':
'<(DEPTH)/build/mac/copy_asan_runtime_dylib.sh',
},
'postbuild_name': 'Copy ASan runtime dylib',
'action': [
'>(copy_asan_dylib_path)',
],
},
],
}],
],
},
}],
['os_posix == 1', {
'defines': [
# gtest isn't able to figure out when RTTI is disabled for gcc
# versions older than 4.3.2, and assumes it's enabled. Our Mac
# and Linux builds disable RTTI, and cannot guarantee that the
# compiler will be 4.3.2 or newer. The Mac, for example, uses
# 4.2.1 as that is the latest available on that platform. gtest
# must be instructed that RTTI is disabled here, and for any
# direct dependents that might include gtest headers.
'GTEST_HAS_RTTI=0',
],
'direct_dependent_settings': {
'defines': [
'GTEST_HAS_RTTI=0',
],
},
}],
['OS=="android" and android_app_abi=="x86"', {
'defines': [
'GTEST_HAS_CLONE=0',
],
'direct_dependent_settings': {
'defines': [
'GTEST_HAS_CLONE=0',
],
},
}],
['OS=="android"', {
# We want gtest features that use tr1::tuple, but we currently
# don't support the variadic templates used by libstdc++'s
# implementation. gtest supports this scenario by providing its
# own implementation but we must opt in to it.
'defines': [
'GTEST_USE_OWN_TR1_TUPLE=1',
# GTEST_USE_OWN_TR1_TUPLE only works if GTEST_HAS_TR1_TUPLE is set.
# gtest r625 made it so that GTEST_HAS_TR1_TUPLE is set to 0
# automatically on android, so it has to be set explicitly here.
'GTEST_HAS_TR1_TUPLE=1',
],
'direct_dependent_settings': {
'defines': [
'GTEST_USE_OWN_TR1_TUPLE=1',
'GTEST_HAS_TR1_TUPLE=1',
],
},
}],
],
'direct_dependent_settings': {
'defines': [
'UNIT_TEST',
],
'include_dirs': [
'gtest/include', # So that gtest headers can find themselves.
],
'target_conditions': [
['_type=="executable"', {
'test': 1,
'conditions': [
['OS=="mac"', {
'run_as': {
'action': ['${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}'],
},
}],
['OS=="ios"', {
'variables': {
# Use a variable so the path gets fixed up so it is always
# correct when the action finally gets used.
'ios_run_unittest_script_path':
'<(DEPTH)/testing/gtest_ios/run-unittest.sh',
},
'run_as': {
'action': ['>(ios_run_unittest_script_path)'],
},
}],
['OS=="win"', {
'run_as': {
'action': ['$(TargetPath)', '--gtest_print_time'],
},
}],
],
}],
],
'msvs_disabled_warnings': [4800],
},
},
{
'target_name': 'gtest_main',
'type': 'static_library',
'dependencies': [
'gtest',
],
'sources': [
'gtest/src/gtest_main.cc',
],
},
{
'target_name': 'gtest_prod',
'toolsets': ['host', 'target'],
'type': 'none',
'sources': [
'gtest/include/gtest/gtest_prod.h',
],
},
],
}


@ -1,41 +0,0 @@
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
{
'variables': {
'gtest_sources': [
'gtest/include/gtest/gtest-death-test.h',
'gtest/include/gtest/gtest-message.h',
'gtest/include/gtest/gtest-param-test.h',
'gtest/include/gtest/gtest-printers.h',
'gtest/include/gtest/gtest-spi.h',
'gtest/include/gtest/gtest-test-part.h',
'gtest/include/gtest/gtest-typed-test.h',
'gtest/include/gtest/gtest.h',
'gtest/include/gtest/gtest_pred_impl.h',
'gtest/include/gtest/gtest_prod.h',
'gtest/include/gtest/internal/gtest-death-test-internal.h',
'gtest/include/gtest/internal/gtest-filepath.h',
'gtest/include/gtest/internal/gtest-internal.h',
'gtest/include/gtest/internal/gtest-linked_ptr.h',
'gtest/include/gtest/internal/gtest-param-util-generated.h',
'gtest/include/gtest/internal/gtest-param-util.h',
'gtest/include/gtest/internal/gtest-port.h',
'gtest/include/gtest/internal/gtest-string.h',
'gtest/include/gtest/internal/gtest-tuple.h',
'gtest/include/gtest/internal/gtest-type-util.h',
'gtest/src/gtest-death-test.cc',
'gtest/src/gtest-filepath.cc',
'gtest/src/gtest-internal-inl.h',
'gtest/src/gtest-port.cc',
'gtest/src/gtest-printers.cc',
'gtest/src/gtest-test-part.cc',
'gtest/src/gtest-typed-test.cc',
'gtest/src/gtest.cc',
'multiprocess_func_list.cc',
'multiprocess_func_list.h',
'platform_test.h',
],
},
}


@ -1,48 +0,0 @@
// Copyright (c) 2010 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef TESTING_GTEST_MAC_H_
#define TESTING_GTEST_MAC_H_
#include <gtest/internal/gtest-port.h>
#include <gtest/gtest.h>
#ifdef GTEST_OS_MAC
#import <Foundation/Foundation.h>
namespace testing {
namespace internal {
// This overloaded version allows comparison between ObjC objects that conform
// to the NSObject protocol. Used to implement {ASSERT|EXPECT}_EQ().
GTEST_API_ AssertionResult CmpHelperNSEQ(const char* expected_expression,
const char* actual_expression,
id<NSObject> expected,
id<NSObject> actual);
// This overloaded version allows comparison between ObjC objects that conform
// to the NSObject protocol. Used to implement {ASSERT|EXPECT}_NE().
GTEST_API_ AssertionResult CmpHelperNSNE(const char* expected_expression,
const char* actual_expression,
id<NSObject> expected,
id<NSObject> actual);
} // namespace internal
} // namespace testing
// Tests that [expected isEqual:actual].
#define EXPECT_NSEQ(expected, actual) \
EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperNSEQ, expected, actual)
#define EXPECT_NSNE(val1, val2) \
EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperNSNE, val1, val2)
#define ASSERT_NSEQ(expected, actual) \
ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperNSEQ, expected, actual)
#define ASSERT_NSNE(val1, val2) \
ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperNSNE, val1, val2)
#endif // GTEST_OS_MAC
#endif // TESTING_GTEST_MAC_H_


@ -1,61 +0,0 @@
// Copyright (c) 2010 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#import "gtest_mac.h"

#include <string>

#include <gtest/internal/gtest-port.h>
#include <gtest/internal/gtest-string.h>
#include <gtest/gtest.h>

#ifdef GTEST_OS_MAC

#import <Foundation/Foundation.h>

namespace testing {
namespace internal {

// Handles nil values for |obj| properly by using safe printing of %@ in
// -stringWithFormat:.
static inline const char* StringDescription(id<NSObject> obj) {
  return [[NSString stringWithFormat:@"%@", obj] UTF8String];
}

// This overloaded version allows comparison between ObjC objects that conform
// to the NSObject protocol. Used to implement {ASSERT|EXPECT}_EQ().
GTEST_API_ AssertionResult CmpHelperNSEQ(const char* expected_expression,
                                         const char* actual_expression,
                                         id<NSObject> expected,
                                         id<NSObject> actual) {
  if (expected == actual || [expected isEqual:actual]) {
    return AssertionSuccess();
  }
  return EqFailure(expected_expression,
                   actual_expression,
                   std::string(StringDescription(expected)),
                   std::string(StringDescription(actual)),
                   false);
}

// This overloaded version allows comparison between ObjC objects that conform
// to the NSObject protocol. Used to implement {ASSERT|EXPECT}_NE().
GTEST_API_ AssertionResult CmpHelperNSNE(const char* expected_expression,
                                         const char* actual_expression,
                                         id<NSObject> expected,
                                         id<NSObject> actual) {
  if (expected != actual && ![expected isEqual:actual]) {
    return AssertionSuccess();
  }
  Message msg;
  msg << "Expected: (" << expected_expression << ") != (" << actual_expression
      << "), actual: " << StringDescription(expected)
      << " vs " << StringDescription(actual);
  return AssertionFailure(msg);
}

}  // namespace internal
}  // namespace testing

#endif  // GTEST_OS_MAC


@@ -1,57 +0,0 @@
// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// Note that while this file is in testing/ and tests GTest macros, it is built
// as part of Chromium's unit_tests target because the project does not build
// or run GTest's internal test suite.

#import "testing/gtest_mac.h"

#import <Foundation/Foundation.h>

#include "base/mac/scoped_nsautorelease_pool.h"
#include "testing/gtest/include/gtest/internal/gtest-port.h"
#include "testing/gtest/include/gtest/gtest.h"

TEST(GTestMac, ExpectNSEQ) {
  base::mac::ScopedNSAutoreleasePool pool;

  EXPECT_NSEQ(@"a", @"a");

  NSString* s1 = [NSString stringWithUTF8String:"a"];
  NSString* s2 = @"a";
  EXPECT_NE(s1, s2);
  EXPECT_NSEQ(s1, s2);
}

TEST(GTestMac, AssertNSEQ) {
  base::mac::ScopedNSAutoreleasePool pool;

  NSString* s1 = [NSString stringWithUTF8String:"a"];
  NSString* s2 = @"a";
  EXPECT_NE(s1, s2);
  ASSERT_NSEQ(s1, s2);
}

TEST(GTestMac, ExpectNSNE) {
  base::mac::ScopedNSAutoreleasePool pool;

  EXPECT_NSNE([NSNumber numberWithInt:2], [NSNumber numberWithInt:42]);
}

TEST(GTestMac, AssertNSNE) {
  base::mac::ScopedNSAutoreleasePool pool;

  ASSERT_NSNE(@"a", @"b");
}

TEST(GTestMac, ExpectNSNil) {
  base::mac::ScopedNSAutoreleasePool pool;

  EXPECT_NSEQ(nil, nil);
  EXPECT_NSNE(nil, @"a");
  EXPECT_NSNE(@"a", nil);

  // TODO(shess): Test that EXPECT_NSNE(nil, nil) fails.
}


@@ -1,57 +0,0 @@
// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "multiprocess_func_list.h"

#include <map>

// Helper functions to maintain mapping of "test name"->test func.
// The information is accessed via a global map.
namespace multi_process_function_list {

namespace {

struct ProcessFunctions {
  ProcessFunctions() : main(NULL), setup(NULL) {}
  ProcessFunctions(TestMainFunctionPtr main, SetupFunctionPtr setup)
      : main(main),
        setup(setup) {
  }
  TestMainFunctionPtr main;
  SetupFunctionPtr setup;
};

typedef std::map<std::string, ProcessFunctions> MultiProcessTestMap;

// Retrieve a reference to the global 'func name' -> func ptr map.
MultiProcessTestMap& GetMultiprocessFuncMap() {
  static MultiProcessTestMap test_name_to_func_ptr_map;
  return test_name_to_func_ptr_map;
}

}  // namespace

AppendMultiProcessTest::AppendMultiProcessTest(
    std::string test_name,
    TestMainFunctionPtr main_func_ptr,
    SetupFunctionPtr setup_func_ptr) {
  GetMultiprocessFuncMap()[test_name] =
      ProcessFunctions(main_func_ptr, setup_func_ptr);
}

int InvokeChildProcessTest(std::string test_name) {
  MultiProcessTestMap& func_lookup_table = GetMultiprocessFuncMap();
  MultiProcessTestMap::iterator it = func_lookup_table.find(test_name);
  if (it != func_lookup_table.end()) {
    const ProcessFunctions& process_functions = it->second;
    if (process_functions.setup)
      (*process_functions.setup)();
    if (process_functions.main)
      return (*process_functions.main)();
  }

  return -1;
}

}  // namespace multi_process_function_list


@@ -1,70 +0,0 @@
// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef TESTING_MULTIPROCESS_FUNC_LIST_H_
#define TESTING_MULTIPROCESS_FUNC_LIST_H_

#include <string>

// This file provides the plumbing to register functions to be executed
// as the main function of a child process in a multi-process test.
// This complements the MultiProcessTest class, which provides facilities
// for launching such tests.
//
// The MULTIPROCESS_TEST_MAIN() macro registers a string -> func_ptr mapping
// by creating a new global instance of the AppendMultiProcessTest class;
// this means that by the time we reach our main() function, the mapping
// is already in place.
//
// Example usage:
//  MULTIPROCESS_TEST_MAIN(a_test_func) {
//    // Code here runs in a child process.
//    return 0;
//  }
//
// The prototype of a_test_func is implicitly
//   int a_test_func();

namespace multi_process_function_list {

// Type for child process main functions.
typedef int (*TestMainFunctionPtr)();

// Type for child setup functions.
typedef void (*SetupFunctionPtr)();

// Helper class to append a test function to the global mapping.
// Used by the MULTIPROCESS_TEST_MAIN macro.
class AppendMultiProcessTest {
 public:
  // |main_func_ptr| is the main function that is run in the child process.
  // |setup_func_ptr| is a function run when the global mapping is added.
  AppendMultiProcessTest(std::string test_name,
                         TestMainFunctionPtr main_func_ptr,
                         SetupFunctionPtr setup_func_ptr);
};

// Invoke the main function of a test previously registered with
// MULTIPROCESS_TEST_MAIN().
int InvokeChildProcessTest(std::string test_name);

// This macro creates a global multi_process_function_list::
// AppendMultiProcessTest object whose constructor does the work of adding
// the global mapping.
#define MULTIPROCESS_TEST_MAIN(test_main) \
  MULTIPROCESS_TEST_MAIN_WITH_SETUP(test_main, NULL)

// Same as above, but lets callers specify a setup method that is run in the
// child process, just before the main function is run. This facilitates
// adding a generic one-time setup function for multiple tests.
#define MULTIPROCESS_TEST_MAIN_WITH_SETUP(test_main, test_setup) \
  int test_main(); \
  namespace { \
    multi_process_function_list::AppendMultiProcessTest \
        AddMultiProcessTest##_##test_main(#test_main, (test_main), (test_setup)); \
  } \
  int test_main()

}  // namespace multi_process_function_list

#endif  // TESTING_MULTIPROCESS_FUNC_LIST_H_
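
To make the registration flow concrete, here is a minimal in-process sketch of
how a harness might use this registry. The actual child-process launch
(normally provided by a MultiProcessTest-style harness) is elided, and the
test name "echo_child" is made up; this simply dispatches by name the way a
freshly spawned child would:

#include <cstdio>

#include "multiprocess_func_list.h"

// Registered in the global map during static initialization, before main().
MULTIPROCESS_TEST_MAIN(echo_child) {
  printf("hello from the child\n");
  return 0;  // Becomes the child's exit code.
}

int main() {
  // A real child process would receive "echo_child" on its command line
  // and call this to locate and run the registered function.
  return multi_process_function_list::InvokeChildProcessTest("echo_child");
}

InvokeChildProcessTest() returns -1 when the name was never registered, so a
typo in the test name surfaces as a child exit code rather than a crash.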


@@ -1,8 +0,0 @@
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

source_set("perf") {
  sources = [ "perf_test.cc" ]
  deps = [ "//base" ]
}


@@ -1,204 +0,0 @@
// Copyright 2013 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "testing/perf/perf_test.h"

#include <stdio.h>

#include "base/logging.h"
#include "base/strings/string_number_conversions.h"
#include "base/strings/stringprintf.h"

namespace {

std::string ResultsToString(const std::string& measurement,
                            const std::string& modifier,
                            const std::string& trace,
                            const std::string& values,
                            const std::string& prefix,
                            const std::string& suffix,
                            const std::string& units,
                            bool important) {
  // <*>RESULT <graph_name>: <trace_name>= <value> <units>
  // <*>RESULT <graph_name>: <trace_name>= {<mean>, <std deviation>} <units>
  // <*>RESULT <graph_name>: <trace_name>= [<value>,value,value,...,] <units>
  return base::StringPrintf("%sRESULT %s%s: %s= %s%s%s %s\n",
      important ? "*" : "", measurement.c_str(), modifier.c_str(),
      trace.c_str(), prefix.c_str(), values.c_str(), suffix.c_str(),
      units.c_str());
}

void PrintResultsImpl(const std::string& measurement,
                      const std::string& modifier,
                      const std::string& trace,
                      const std::string& values,
                      const std::string& prefix,
                      const std::string& suffix,
                      const std::string& units,
                      bool important) {
  fflush(stdout);
  printf("%s", ResultsToString(measurement, modifier, trace, values,
                               prefix, suffix, units, important).c_str());
  fflush(stdout);
}

}  // namespace

namespace perf_test {

void PrintResult(const std::string& measurement,
                 const std::string& modifier,
                 const std::string& trace,
                 size_t value,
                 const std::string& units,
                 bool important) {
  PrintResultsImpl(measurement,
                   modifier,
                   trace,
                   base::UintToString(static_cast<unsigned int>(value)),
                   std::string(),
                   std::string(),
                   units,
                   important);
}

void PrintResult(const std::string& measurement,
                 const std::string& modifier,
                 const std::string& trace,
                 double value,
                 const std::string& units,
                 bool important) {
  PrintResultsImpl(measurement,
                   modifier,
                   trace,
                   base::DoubleToString(value),
                   std::string(),
                   std::string(),
                   units,
                   important);
}

void AppendResult(std::string& output,
                  const std::string& measurement,
                  const std::string& modifier,
                  const std::string& trace,
                  size_t value,
                  const std::string& units,
                  bool important) {
  output += ResultsToString(
      measurement,
      modifier,
      trace,
      base::UintToString(static_cast<unsigned int>(value)),
      std::string(),
      std::string(),
      units,
      important);
}

void PrintResult(const std::string& measurement,
                 const std::string& modifier,
                 const std::string& trace,
                 const std::string& value,
                 const std::string& units,
                 bool important) {
  PrintResultsImpl(measurement,
                   modifier,
                   trace,
                   value,
                   std::string(),
                   std::string(),
                   units,
                   important);
}

void AppendResult(std::string& output,
                  const std::string& measurement,
                  const std::string& modifier,
                  const std::string& trace,
                  const std::string& value,
                  const std::string& units,
                  bool important) {
  output += ResultsToString(measurement,
                            modifier,
                            trace,
                            value,
                            std::string(),
                            std::string(),
                            units,
                            important);
}

void PrintResultMeanAndError(const std::string& measurement,
                             const std::string& modifier,
                             const std::string& trace,
                             const std::string& mean_and_error,
                             const std::string& units,
                             bool important) {
  PrintResultsImpl(measurement, modifier, trace, mean_and_error,
                   "{", "}", units, important);
}

void AppendResultMeanAndError(std::string& output,
                              const std::string& measurement,
                              const std::string& modifier,
                              const std::string& trace,
                              const std::string& mean_and_error,
                              const std::string& units,
                              bool important) {
  output += ResultsToString(measurement, modifier, trace, mean_and_error,
                            "{", "}", units, important);
}

void PrintResultList(const std::string& measurement,
                     const std::string& modifier,
                     const std::string& trace,
                     const std::string& values,
                     const std::string& units,
                     bool important) {
  PrintResultsImpl(measurement, modifier, trace, values,
                   "[", "]", units, important);
}

void AppendResultList(std::string& output,
                      const std::string& measurement,
                      const std::string& modifier,
                      const std::string& trace,
                      const std::string& values,
                      const std::string& units,
                      bool important) {
  output += ResultsToString(measurement, modifier, trace, values,
                            "[", "]", units, important);
}

void PrintSystemCommitCharge(const std::string& test_name,
                             size_t charge,
                             bool important) {
  PrintSystemCommitCharge(stdout, test_name, charge, important);
}

void PrintSystemCommitCharge(FILE* target,
                             const std::string& test_name,
                             size_t charge,
                             bool important) {
  fprintf(target, "%s", SystemCommitChargeToString(test_name, charge,
                                                   important).c_str());
}

std::string SystemCommitChargeToString(const std::string& test_name,
                                       size_t charge,
                                       bool important) {
  std::string trace_name(test_name);
  std::string output;
  AppendResult(output,
               "commit_charge",
               std::string(),
               "cc" + trace_name,
               charge,
               "kb",
               important);
  return output;
}

}  // namespace perf_test


@@ -1,18 +0,0 @@
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

{
  'targets': [
    {
      'target_name': 'perf_test',
      'type': 'static_library',
      'sources': [
        'perf_test.cc',
      ],
      'dependencies': [
        '../../base/base.gyp:base',
      ],
    },
  ],
}


@@ -1,116 +0,0 @@
// Copyright 2013 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef TESTING_PERF_PERF_TEST_H_
#define TESTING_PERF_PERF_TEST_H_

#include <stdio.h>

#include <string>

namespace perf_test {

// Prints numerical information to stdout in a controlled format, for
// post-processing. |measurement| is a description of the quantity being
// measured, e.g. "vm_peak"; |modifier| is provided as a convenience and
// will be appended directly to the name of the |measurement|, e.g.
// "_browser"; |trace| is a description of the particular data point, e.g.
// "reference"; |value| is the measured value; and |units| is a description
// of the units of measure, e.g. "bytes". If |important| is true, the output
// line will be specially marked, to notify the post-processor. The strings
// may be empty. They should not contain any colons (:) or equals signs (=).
// A typical post-processing step would be to produce graphs of the data
// produced for various builds, using the combined |measurement| + |modifier|
// string to specify a particular graph and the |trace| to identify a trace
// (i.e., data series) on that graph.
void PrintResult(const std::string& measurement,
                 const std::string& modifier,
                 const std::string& trace,
                 size_t value,
                 const std::string& units,
                 bool important);
void PrintResult(const std::string& measurement,
                 const std::string& modifier,
                 const std::string& trace,
                 double value,
                 const std::string& units,
                 bool important);

void AppendResult(std::string& output,
                  const std::string& measurement,
                  const std::string& modifier,
                  const std::string& trace,
                  size_t value,
                  const std::string& units,
                  bool important);

// Like the above version of PrintResult(), but takes a std::string value
// instead of a size_t.
void PrintResult(const std::string& measurement,
                 const std::string& modifier,
                 const std::string& trace,
                 const std::string& value,
                 const std::string& units,
                 bool important);

void AppendResult(std::string& output,
                  const std::string& measurement,
                  const std::string& modifier,
                  const std::string& trace,
                  const std::string& value,
                  const std::string& units,
                  bool important);

// Like PrintResult(), but prints a (mean, standard deviation) result pair.
// The |mean_and_error| should be two comma-separated numbers: the mean and
// the standard deviation (or other error metric) of the measurement.
void PrintResultMeanAndError(const std::string& measurement,
                             const std::string& modifier,
                             const std::string& trace,
                             const std::string& mean_and_error,
                             const std::string& units,
                             bool important);

void AppendResultMeanAndError(std::string& output,
                              const std::string& measurement,
                              const std::string& modifier,
                              const std::string& trace,
                              const std::string& mean_and_error,
                              const std::string& units,
                              bool important);

// Like PrintResult(), but prints an entire list of results. The |values|
// will generally be a list of comma-separated numbers. A typical
// post-processing step might produce plots of their mean and standard
// deviation.
void PrintResultList(const std::string& measurement,
                     const std::string& modifier,
                     const std::string& trace,
                     const std::string& values,
                     const std::string& units,
                     bool important);

void AppendResultList(std::string& output,
                      const std::string& measurement,
                      const std::string& modifier,
                      const std::string& trace,
                      const std::string& values,
                      const std::string& units,
                      bool important);

// Prints memory commit charge stats for use by perf graphs.
void PrintSystemCommitCharge(const std::string& test_name,
                             size_t charge,
                             bool important);

void PrintSystemCommitCharge(FILE* target,
                             const std::string& test_name,
                             size_t charge,
                             bool important);

std::string SystemCommitChargeToString(const std::string& test_name,
                                       size_t charge,
                                       bool important);

}  // namespace perf_test

#endif  // TESTING_PERF_PERF_TEST_H_
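
As a quick illustration of the output contract documented above (a hedged
sketch; the measurement names and values are made up), the following program
emits a single RESULT line in the format built by ResultsToString() in
perf_test.cc:

#include "testing/perf/perf_test.h"

int main() {
  // Prints: *RESULT vm_peak_browser: reference= 12345 bytes
  // The leading '*' appears because |important| is true; the graph name is
  // |measurement| + |modifier|, and |trace| names the data series.
  perf_test::PrintResult("vm_peak", "_browser", "reference",
                         static_cast<size_t>(12345), "bytes", true);
  return 0;
}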


@@ -1,36 +0,0 @@
// Copyright (c) 2006-2008 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef TESTING_PLATFORM_TEST_H_
#define TESTING_PLATFORM_TEST_H_

#include <gtest/gtest.h>

#if defined(GTEST_OS_MAC)
#ifdef __OBJC__
@class NSAutoreleasePool;
#else
class NSAutoreleasePool;
#endif

// The purpose of this class is to provide a hook for platform-specific
// operations across unit tests. For example, on the Mac, it creates and
// releases an outer NSAutoreleasePool for each test case. For now, it's only
// implemented on the Mac. To enable this for another platform, just adjust
// the #ifdefs and add a platform_test_<platform>.cc implementation file.
class PlatformTest : public testing::Test {
 public:
  virtual ~PlatformTest();

 protected:
  PlatformTest();

 private:
  NSAutoreleasePool* pool_;
};
#else
typedef testing::Test PlatformTest;
#endif  // GTEST_OS_MAC

#endif  // TESTING_PLATFORM_TEST_H_
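
In practice, a test fixture derives from PlatformTest instead of
testing::Test, so Mac builds get a fresh autorelease pool around each test
while other platforms compile the same code unchanged. A minimal sketch
(MyFeatureTest is a hypothetical fixture name for illustration):

#include "platform_test.h"

// On GTEST_OS_MAC this wraps each test in an NSAutoreleasePool;
// elsewhere PlatformTest is just a typedef for testing::Test.
class MyFeatureTest : public PlatformTest {};

TEST_F(MyFeatureTest, Compiles) {
  EXPECT_TRUE(true);
}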


@@ -1,18 +0,0 @@
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "platform_test.h"

#import <Foundation/Foundation.h>

#include "coverage_util_ios.h"

PlatformTest::PlatformTest()
    : pool_([[NSAutoreleasePool alloc] init]) {
}

PlatformTest::~PlatformTest() {
  [pool_ release];
  coverage_util::FlushCoverageDataIfNecessary();
}

Some files were not shown because too many files have changed in this diff.