{"id":13907583,"url":"https://github.com/intel/media-delivery","last_synced_at":"2025-07-18T05:33:05.340Z","repository":{"id":37067739,"uuid":"269478039","full_name":"intel/media-delivery","owner":"intel","description":"This collection of samples demonstrates best practices to achieve optimal video quality and performance on Intel GPUs for content delivery networks. Check out our demo, recommended command lines and quality and performance measuring tools.","archived":true,"fork":false,"pushed_at":"2025-04-21T16:48:33.000Z","size":3735,"stargazers_count":105,"open_issues_count":6,"forks_count":30,"subscribers_count":8,"default_branch":"master","last_synced_at":"2025-04-21T17:43:25.845Z","etag":null,"topics":["cdn","docker","ffmpeg","hls","msdk","nginx","qsv","vod"],"latest_commit_sha":null,"homepage":"","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/intel.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-06-04T22:26:25.000Z","updated_at":"2025-04-21T16:48:37.000Z","dependencies_parsed_at":"2023-12-21T06:58:10.442Z","dependency_job_id":"3a96f3d8-a343-4d72-953a-8f13d9de59b0","html_url":"https://github.com/intel/media-delivery","commit_stats":null,"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"purl":"pkg:github/intel/media-delivery","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fmedia-delivery","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fmedia-delivery/tags","releases_url":"https://repos.ecosyste.ms
/api/v1/hosts/GitHub/repositories/intel%2Fmedia-delivery/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fmedia-delivery/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/intel","download_url":"https://codeload.github.com/intel/media-delivery/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fmedia-delivery/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":265705436,"owners_count":23814455,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cdn","docker","ffmpeg","hls","msdk","nginx","qsv","vod"],"created_at":"2024-08-06T23:02:00.555Z","updated_at":"2025-07-18T05:33:05.329Z","avatar_url":"https://github.com/intel.png","language":"Shell","readme":"# PROJECT NOT UNDER ACTIVE MANAGEMENT #  \nThis project will no longer be maintained by Intel.  \nIntel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.  \nIntel no longer accepts patches to this project.  \n If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.  \n  \nMedia Delivery Software Stack\n=============================\n\n.. contents::\n\n.. |virt-guide| replace:: GPU virtualization setup guide\n.. 
_virt-guide: doc/virtualization.rst\n\nBackground\n----------\n\nIntel GPUs have dedicated hardware that enables fast, energy-efficient encoding \nand decoding of video compliant with industry standards such as H.264, HEVC and AV1.\nDrivers and software enable application developers to utilize this hardware either\ndirectly from their own applications or from within popular frameworks such as FFmpeg \nand GStreamer.\n\nOverview\n--------\n\nThis repository curates a coherent set of software, drivers and utilities to assist\nusers who are getting started with transcoding video on Intel GPUs so that they can\nbegin evaluating performance and incorporating Intel GPU transcode acceleration into\ntheir applications.\n\nIncluded here are scripts, Docker configurations and documents that provide:\n\n* **Kernel mode drivers.**\n\n  * The latest Intel GPUs are not fully supported by currently available Linux\n    distributions, so backported drivers can be installed by following the instructions\n    below.\n\n* **User mode drivers and libraries:**\n\n  * `Intel® Media Driver \u003chttps://github.com/intel/media-driver\u003e`_\n  * `Intel® oneAPI Video Processing Library (oneVPL) \u003chttps://github.com/oneapi-src/oneVPL\u003e`_\n  * `Intel® Media SDK \u003chttps://github.com/Intel-Media-SDK/MediaSDK\u003e`_\n\n* **Transcoding software:**\n\n  * `FFmpeg \u003chttp://ffmpeg.org/\u003e`_ with `ffmpeg-qsv \u003chttps://trac.ffmpeg.org/wiki/Hardware/QuickSync\u003e`_\n    plugins (qsv means \"QuickSync\", a name for Intel's GPU video transcoding technology)\n\n* **Quality and performance measuring software:**\n\n  * Set of `quality \u003cdoc/quality.rst\u003e`_ and `performance \u003cdoc/performance.rst\u003e`_\n    benchmarking scripts\n  * `VMAF \u003chttps://github.com/Netflix/vmaf\u003e`_\n  * `Intel GPU Tools \u003chttps://gitlab.freedesktop.org/drm/igt-gpu-tools\u003e`_\n\n* **Demonstrations:**\n\n  * Samples which demonstrate operations typical for Content Delivery 
Network (CDN)\n    applications such as Video On Demand (VOD) streaming under Nginx server.\n\n* **Setup guides and user documentation:**\n\n  * |virt-guide|_\n  * `Capabilities of ffmpeg-qsv plugins \u003cdoc/features/ffmpeg-qsv\u003e`_\n\n* `Benchmarking data \u003cdoc/benchmarks/readme.rst\u003e`_ **for specific GPUs:**\n\n  * `Benchmarks for Intel® Data Center GPU Flex Series \u003cdoc/benchmarks/intel-data-center-gpu-flex-series/intel-data-center-gpu-flex-series.rst\u003e`_\n  * `Benchmarks for Intel® Iris® Xe MAX graphics \u003cdoc/benchmarks/intel-iris-xe-max-graphics/intel-iris-xe-max-graphics.md\u003e`_\n\nPrerequisites\n-------------\n\nTo run Intel® Data Center GPU Flex Series\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWe recommend to attach GPU of `Intel® Data Center GPU Flex Series \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/230021/intel-data-center-gpu-flex-series.html\u003e`_\nto the system with the following characteristics:\n\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n|                     | Recommendation                                                                                                            |\n+=====================+===========================================================================================================================+\n| CPU                 | `3rd Generation Intel® Xeon® Scalable Processors                                                                          |\n|                     | \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/204098/3rd-generation-intel-xeon-scalable-processors.html\u003e`_ |\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n| Storage             | At least of 20GB of free disk space available for docker                                 
                                 |\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n| Operating System(s) | Ubuntu 20.04 or Ubuntu 22.04 with kernel mode support as described `here \u003cdoc/intel-gpu-dkms.rst\u003e`__                      |\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n\nTo run Intel® Arc™ A-Series Graphics\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nGPU of `Intel® Arc™ A-Series Graphics \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/227957/intel-arc-a-series-graphics.html\u003e`_\nshould be attached to the system with the following requirements:\n\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n|                     | Recommendation                                                                                                            |\n+=====================+===========================================================================================================================+\n| CPU                 | One of the following CPUs:                                                                                                |\n|                     |                                                                                                                           |\n|                     | * `12th Generation Intel® Core™ i9 Processors                                                                             |\n|                     |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/217839/12th-generation-intel-core-i9-processors.html\u003e`_    |\n|                     | * `12th Generation Intel® Core™ i7 Processors                                                                             |\n|  
                   |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/217837/12th-generation-intel-core-i7-processors.html\u003e`_    |\n|                     | * `12th Generation Intel® Core™ i5 Processors                                                                             |\n|                     |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/217838/12th-generation-intel-core-i5-processors.html\u003e`_    |\n|                     | * `12th Generation Intel® Core™ i3 Processors                                                                             |\n|                     |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/217840/12th-generation-intel-core-i3-processors.html\u003e`_    |\n|                     | * `11th Generation Intel® Core™ i9 Processors                                                                             |\n|                     |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/202984/11th-generation-intel-core-i9-processors.html\u003e`_    |\n|                     | * `11th Generation Intel® Core™ i7 Processors                                                                             |\n|                     |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/202986/11th-generation-intel-core-i7-processors.html\u003e`_    |\n|                     | * `11th Generation Intel® Core™ i5 Processors                                                                             |\n|                     |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/202985/11th-generation-intel-core-i5-processors.html\u003e`_    |\n|                     | * `11th Generation Intel® Core™ i3 Processors                                                                             |\n|                     |   \u003chttps://ark.intel.com/content/www/us/en/ark/products/series/202987/11th-generation-intel-core-i3-processors.html\u003e`_   
 |\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n| Storage             | At least of 20GB of free disk space available for docker                                                                  |\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n| Operating System(s) | Ubuntu 20.04 or Ubuntu 22.04 with kernel mode support as described `here \u003cdoc/intel-gpu-dkms.rst\u003e`__                      |\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n\nTo run upstreamed GPU products\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nTo run upstream GPU products, i.e. those GPUs support for which is available in Linux\ndistributions, you need:\n\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n|                     | Recommendation                                                                                                            |\n+=====================+===========================================================================================================================+\n| Operating System(s) | Linux distribution capable to support your GPU. We recommend Ubuntu 20.04 or later.                                       
|\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n| Storage             | At least 20GB of free disk space available for docker                                                                     |\n+---------------------+---------------------------------------------------------------------------------------------------------------------------+\n\nInstallation\n------------\n\nWe provide dockerfiles to set up a transcoding environment. These dockerfiles need to be\nbuilt into docker containers. The dockerfiles can also be used as reference instructions\nto install the ingredients on bare metal.\n\nIf you want to run in a virtual environment, first follow the instructions given in\n|virt-guide|_ and then proceed with the setup described below.\n\nSetup Docker\n~~~~~~~~~~~~\n\nDocker is required to build and to run media delivery containers. If you run Ubuntu 20.04\nor later, you can install it as follows::\n\n  sudo apt-get install docker.io\n\nYou might need to further configure docker on your system:\n\n* Allow docker to run under your user account (remember to re-login for the group modification\n  to take effect)::\n\n    sudo usermod -aG docker $(whoami) \u0026\u0026 exit\n\n* Consider registering and logging in to `Docker Hub \u003chttps://hub.docker.com/\u003e`_. Docker Hub\n  limits the number of docker image downloads (\"pulls\"). For anonymous users this limit\n  is tied to the IP address. For authenticated users it depends on the subscription type. For\n  details see https://docs.docker.com/docker-hub/download-rate-limit/. To authenticate the\n  running docker engine, execute::\n\n    # you will be prompted to enter username and password:\n    docker login\n\n* If you run behind a proxy, configure proxies for the docker daemon. Refer to\n  https://docs.docker.com/config/daemon/systemd/. 
The example below assumes that you\n  have the ``http_proxy`` environment variable set in advance::\n\n    sudo mkdir -p /etc/systemd/system/docker.service.d\n    echo \"[Service]\" | sudo tee /etc/systemd/system/docker.service.d/https-proxy.conf\n    echo \"Environment=\\\"HTTPS_PROXY=$http_proxy/\\\"\" | \\\n      sudo tee -a /etc/systemd/system/docker.service.d/https-proxy.conf\n\n    sudo systemctl daemon-reload\n    sudo systemctl restart docker\n\n    sudo systemctl show --property=Environment --no-pager docker\n\n* Make sure that docker has at least 20GB of hard disk space to use. To check the available\n  space, run (in the example below 39GB are available)::\n\n    $ df -h $(docker info -f '{{ .DockerRootDir}}')\n    Filesystem      Size  Used Avail Use% Mounted on\n    /dev/sda1        74G   32G   39G  46% /\n\n  If there is not enough disk space (for example, the default ``/var/lib/docker`` is mounted to\n  a small partition, which might be the case for ``/var``), consider reconfiguring the\n  docker storage location as follows::\n\n    # Below assumes an unaltered default docker installation where\n    # /etc/docker/daemon.json does not exist\n    echo \"{\\\"data-root\\\": \\\"/mnt/newlocation\\\"}\" | sudo tee /etc/docker/daemon.json\n    sudo systemctl daemon-reload\n    sudo systemctl restart docker\n\nSetup Media Delivery\n~~~~~~~~~~~~~~~~~~~~\n\nWe provide a few different setup configurations which differ in the versions and origins\nof the included Intel media stack components. 
Some versions of media stack require\nspecial setup for the host.\n\n+----------------------------------------------------+-----------------------------+------------------------------------------------+--------------------------------------------+\n| Dockerfile                                         | Intel media stack origin    | Supported Intel GPUs                           | Host setup instructions                    |\n+====================================================+=============================+================================================+============================================+\n| `docker/ubuntu20.04/selfbuild-prodkmd/Dockerfile`_ | Self-built from open source | Alchemist, ATS-M                               | `Intel GPU DKMS \u003cdoc/intel-gpu-dkms.rst\u003e`_ |\n+----------------------------------------------------+-----------------------------+------------------------------------------------+--------------------------------------------+\n| `docker/ubuntu20.04/selfbuild/Dockerfile`_         | Self-built from open source | Gen8+ (legacy upstreamed platforms), such as   | Use any Linux distribution which           |\n|                                                    |                             | SKL, KBL, CFL, TGL, DG1, etc.                  
| supports required platform                 |\n+----------------------------------------------------+-----------------------------+------------------------------------------------+--------------------------------------------+\n| `docker/ubuntu20.04/native/Dockerfile`_            | Ubuntu 20.04                | Gen8+, check Ubuntu 20.04 documentation        | Use any Linux distribution which           |\n|                                                    |                             |                                                | supports required platform                 |\n+----------------------------------------------------+-----------------------------+------------------------------------------------+--------------------------------------------+\n\n.. _docker/ubuntu20.04/selfbuild/Dockerfile: docker/ubuntu20.04/selfbuild/Dockerfile\n.. _docker/ubuntu20.04/selfbuild-prodkmd/Dockerfile: docker/ubuntu20.04/selfbuild-prodkmd/Dockerfile\n.. _docker/ubuntu20.04/native/Dockerfile: docker/ubuntu20.04/native/Dockerfile\n\nTo build any of the configurations, first clone Media Delivery repository::\n\n  git clone https://github.com/intel/media-delivery.git \u0026\u0026 cd media-delivery\n\nTo build configuration which targets DG2/ATS-M stack self-built from open source projects, run::\n\n  docker build \\\n    $(env | grep -E '(_proxy=|_PROXY)' | sed 's/^/--build-arg /') \\\n    --file docker/ubuntu20.04/selfbuild-prodkmd/Dockerfile \\\n    --tag intel-media-delivery .\n\nTo build configuration which targets Gen8+ legacy upstreamed platforms via stack\nself-built from open source projects, run::\n\n  docker build \\\n    $(env | grep -E '(_proxy=|_PROXY)' | sed 's/^/--build-arg /') \\\n    --file docker/ubuntu20.04/selfbuild/Dockerfile \\\n    --tag intel-media-delivery .\n\nRunning\n-------\n\nDocker containers provide isolated environments with configured software.\nTo access resources on a host system you need to add specific options when starting\ndocker 
containers. Overall, the software included in media-delivery containers\nrequires the following:\n\n* To access the desired GPU you need to map it to the container, see the ``--device`` option\n  below\n* To be able to access performance metrics, you need ``--cap-add SYS_ADMIN``\n* To access the nginx server (if you are running a demo), you need to forward port\n  ``8080``, see ``-p 8080:8080``\n\nIn summary, start the container as follows (the ``-v`` option maps a host folder\nto the container so you can copy transcoded streams back to the host)::\n\n  DEVICE=${DEVICE:-/dev/dri/renderD128}\n  DEVICE_GRP=$(stat --format %g $DEVICE)\n  mkdir -p /tmp/media-delivery \u0026\u0026 chmod -R 777 /tmp/media-delivery\n  docker run --rm -it -v /tmp/media-delivery:/opt/media-delivery \\\n    -e DEVICE=$DEVICE --device $DEVICE --group-add $DEVICE_GRP \\\n    --cap-add SYS_ADMIN \\\n    -p 8080:8080 \\\n    intel-media-delivery\n\nOnce inside a container, you can run the included software and scripts. To start,\nwe recommend running the simple `scripts \u003c./scripts/\u003e`_ which showcase basic\ntranscoding capabilities. These scripts will download sample video clips, though\nyou can supply your own as a script argument if needed. 
If you work behind a proxy,\ndo not forget to add it to your environment (via ``export https_proxy=\u003c...\u003e``).\n\n* The below commands will run a single transcoding session (1080p or 4K) and produce\n  output files which you can copy to the host and review::\n\n    # AV1 to AV1:\n    ffmpeg-qsv-AV1-1080p.sh 1\n    ffmpeg-qsv-AV1-4K.sh 1\n    sample-multi-transcode-AV1-1080p.sh 1\n    sample-multi-transcode-AV1-4K.sh 1\n\n    # AVC to AVC:\n    ffmpeg-qsv-AVC-1080p.sh 1\n    ffmpeg-qsv-AVC-4K.sh 1\n    sample-multi-transcode-AVC-1080p.sh 1\n    sample-multi-transcode-AVC-4K.sh 1\n\n    # HEVC to HEVC:\n    ffmpeg-qsv-HEVC-1080p.sh 1\n    ffmpeg-qsv-HEVC-4K.sh 1\n    sample-multi-transcode-HEVC-1080p.sh 1\n    sample-multi-transcode-HEVC-4K.sh 1\n\n* The below commands will run the specified number of parallel transcoding sessions (1080p or 4K).\n  No output files will be produced, but you can check performance. Mind that the below numbers\n  of parallel transcoding sessions are suggested for the Intel® Data Center GPU Flex Series.\n  Other GPUs might support a different number of sessions running in real time::\n\n    # AV1 to AV1:\n    ffmpeg-qsv-AV1-1080p.sh 16\n    ffmpeg-qsv-AV1-4K.sh 4\n    sample-multi-transcode-AV1-1080p.sh 16\n    sample-multi-transcode-AV1-4K.sh 4\n\n    # AVC to AVC:\n    ffmpeg-qsv-AVC-1080p.sh 12\n    ffmpeg-qsv-AVC-4K.sh 2\n    sample-multi-transcode-AVC-1080p.sh 12\n    sample-multi-transcode-AVC-4K.sh 2\n\n    # HEVC to HEVC:\n    ffmpeg-qsv-HEVC-1080p.sh 16\n    ffmpeg-qsv-HEVC-4K.sh 4\n    sample-multi-transcode-HEVC-1080p.sh 16\n    sample-multi-transcode-HEVC-4K.sh 4\n\nThese scripts run transcoding command lines which we recommend using for best performance\nand quality in the case of Random Access encoding. 
See `reference command lines \u003cdoc/reference-command-lines.rst\u003e`_\nfor details.\n\nIn addition to the simple scripts described above, this project provides the following\nscripts and software which can be tried next:\n\n* `Quality and performance benchmark scripts \u003cdoc/benchmarking.rst\u003e`_\n* `CDN Demo \u003cdoc/demo.rst\u003e`_\n\nFor more complex samples, check out `Open Visual Cloud \u003chttps://01.org/openvisualcloud\u003e`_ and\ntheir full scale `CDN Transcode Sample \u003chttps://github.com/OpenVisualCloud/CDN-Transcode-Sample\u003e`_.\n\nRunning 8K with Intel® Deep Link Hyper Encode\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIntel® Deep Link Hyper Encode Technology was designed to boost transcode performance to achieve \n8K60 real-time throughput.  Currently supported via the Sample Multi Transcode application, this \ntechnology was tested on an Intel® Data Center GPU Flex 140 card (2 GPU nodes). The media delivery \ncontainer can be used to check two different flavors of this approach:\n\n- 1 GPU node solution (encoder and decoder workloads are shared between 2 VDBOX engines of a \n  single GPU node and encoding is parallelized on a GOP level)\n- 2 GPU node solution (encoder and decoder workloads use 4 VDBOX engines of 2 GPU nodes and \n  encoding is parallelized on a GOP level)\n\nSimple example scripts are provided to allow a quick test of 8K60 transcoding at the user’s \nend.  
First, start docker with **multiple GPU nodes** mapped (mind ``--device /dev/dri`` vs\n``--device $DEVICE`` as in the previous examples) as follows::\n\n  DEVICE=${DEVICE:-/dev/dri/renderD128}\n  DEVICE_GRP=$(stat --format %g $DEVICE)\n  mkdir -p /tmp/media-delivery \u0026\u0026 chmod -R 777 /tmp/media-delivery\n  docker run --rm -it -v /tmp/media-delivery:/opt/media-delivery \\\n    -e DEVICE=$DEVICE --device /dev/dri --group-add $DEVICE_GRP \\\n    --cap-add SYS_ADMIN \\\n    -p 8080:8080 \\\n    intel-media-delivery\n\nOnce inside the container, simple scripts can be used to showcase Intel® Deep Link Hyper \nEncode Technology. If you work behind a firewall, please add an HTTPS proxy to your environment \n(via ``export https_proxy=\u003c...\u003e``) before running the scripts.\n\n* Use the following commands to run a single 8K transcoding session using either of the two flavors::\n\n    # AV1 to AV1:\n    sample-multi-transcode-AV1-8K-hyperenc1gpu.sh\n    sample-multi-transcode-AV1-8K-hyperenc2gpu.sh\n\n    # HEVC to HEVC:\n    sample-multi-transcode-HEVC-8K-hyperenc1gpu.sh\n    sample-multi-transcode-HEVC-8K-hyperenc2gpu.sh\n\nMore information on Intel® Deep Link Hyper Encode Technology can be found here:\n\n* `Accelerating Media Delivery with Intel® Data Center GPU Flex Series \u003cdoc/benchmarks/intel-data-center-gpu-flex-series/intel-data-center-gpu-flex-series.rst\u003e`_\n\nContributing\n------------\n\nFeedback and contributions are welcome. 
Please submit\n`issues \u003chttps://github.com/intel/media-delivery/issues\u003e`_ and\n`pull requests \u003chttps://github.com/intel/media-delivery/pulls\u003e`_ here at GitHub.\n\nDockerfiles should be supported as described in the `document \u003cdoc/docker.rst\u003e`_.\n\nIf changes are made to dockerfiles and/or scripts, please make sure to\nrun `tests \u003ctests/readme.rst\u003e`_ before submitting pull requests.\n\nContent Attribution\n-------------------\n\nThe container image comes with some embedded content attributed as follows::\n\n  /opt/data/embedded/WAR_TRAILER_HiQ_10_withAudio.mp4:\n    Film: WAR - Courtesy \u0026 Copyright: Yash Raj Films Pvt. Ltd.\n\nInside the container, please refer to the following file::\n\n  cat /opt/data/embedded/usage.txt\n\n","funding_links":[],"categories":["HarmonyOS"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2Fmedia-delivery","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fintel%2Fmedia-delivery","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2Fmedia-delivery/lists"}