# Decord

![CI Build](https://github.com/dmlc/decord/workflows/C/C++%20CI/badge.svg?branch=master)
![Release Build](https://github.com/dmlc/decord/workflows/Publish%20to%20PYPI/badge.svg?branch=master)
[![PyPI](https://img.shields.io/pypi/v/decord.svg)](https://pypi.python.org/pypi/decord)
[![Downloads](http://pepy.tech/badge/decord)](http://pepy.tech/project/decord)

![symbol](docs/symbol.png)

`Decord` is the reverse procedure of `Record`. It provides convenient video slicing methods built on a thin wrapper on top of hardware-accelerated video decoders, e.g.

-   FFMPEG/LibAV (done)
-   NVIDIA codecs (done)
-   Intel codecs

`Decord` was designed to smooth over the awkward experience of shuffling video, so that it feels like a random image loader for deep learning.

`Decord` can also decode audio from both video and audio files. One can slice video and audio together to get a synchronized result, providing a one-stop solution for both video and audio decoding.

Table of contents
=================

- [Benchmark](#preliminary-benchmark)
- [Installation](#installation)
- [Usage](#usage)
- [Bridges for deep learning frameworks](#bridges-for-deep-learning-frameworks)

## Preliminary benchmark

Decord is good at handling random access patterns, which are common during neural network training.

![Speed up](https://user-images.githubusercontent.com/3307514/71223638-7199f300-2289-11ea-9e16-104038f94a55.png)

## Installation

### Install via pip

Simply use

```bash
pip install decord
```

Supported platforms:

- [x] Linux
- [x] macOS >= 10.12, Python >= 3.5
- [x] Windows

**Note that only CPU builds are published on PyPI for now. Please build from source to enable the GPU accelerator.**

### Install from source

#### Linux

Install the system packages for building the shared library. For Debian/Ubuntu users, run:

```bash
# the official PPA ships ffmpeg 2.8, which lacks many features; we use ffmpeg 4.0 here
sudo add-apt-repository ppa:jonathonf/ffmpeg-4 # on Ubuntu 20.04 the official repository already ships version 4.2, so you may skip this step
sudo apt-get update
sudo apt-get install -y build-essential python3-dev python3-setuptools make cmake
sudo apt-get install -y ffmpeg libavcodec-dev libavfilter-dev libavformat-dev libavutil-dev
# note: make sure you have cmake 3.8 or later; if yours is too old, install it from the official cmake website
```

Clone the repo recursively (important):

```bash
git clone --recursive https://github.com/dmlc/decord
```

Build the shared library in the source root directory:

```bash
cd decord
mkdir build && cd build
cmake .. -DUSE_CUDA=0 -DCMAKE_BUILD_TYPE=Release
make
```

To enable NVDEC hardware-accelerated decoding, specify `-DUSE_CUDA=ON`, `-DUSE_CUDA=/path/to/cuda`, or `-DUSE_CUDA=ON -DCMAKE_CUDA_COMPILER=/path/to/cuda/nvcc`:

```bash
cmake .. -DUSE_CUDA=ON -DCMAKE_BUILD_TYPE=Release
```

If you encounter an issue with `libnvcuvid.so` (e.g., see [#102](https://github.com/dmlc/decord/issues/102)), it is probably due to a missing link for `libnvcuvid.so`. You can locate the library manually (`ldconfig -p | grep libnvcuvid`) and link it into `CUDA_TOOLKIT_ROOT_DIR/lib64` so that `decord` can detect and link the correct library.

To specify a custom FFMPEG library path, use `-DFFMPEG_DIR=/path/to/ffmpeg`.

Install the Python bindings:

```bash
cd ../python
# option 1: add the python path to $PYTHONPATH; you will need to install numpy separately
pwd=$PWD
echo "PYTHONPATH=$PYTHONPATH:$pwd" >> ~/.bashrc
source ~/.bashrc
# option 2: install with setuptools
python3 setup.py install --user
```

#### Mac OS

Installation on macOS is similar to Linux, but macOS users need to install build tools such as clang, GNU Make, and cmake first.

Tools like clang and GNU Make are packaged in _Command Line Tools_ for macOS. To install:

```bash
xcode-select --install
```

To install other needed packages like cmake, we recommend first installing Homebrew, a popular package manager for macOS. Detailed instructions can be found on its [homepage](https://brew.sh/).

After installing Homebrew, install cmake and ffmpeg with:

```bash
brew install cmake ffmpeg
# note: make sure you have cmake 3.8 or later; if yours is too old, install it from the official cmake website
```

Clone the repo recursively (important):

```bash
git clone --recursive https://github.com/dmlc/decord
```

Then go to the root directory and build the shared library:

```bash
cd decord
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
```

Install the Python bindings:

```bash
cd ../python
# option 1: add the python path to $PYTHONPATH; you will need to install numpy separately
pwd=$PWD
echo "PYTHONPATH=$PYTHONPATH:$pwd" >> ~/.bash_profile
source ~/.bash_profile
# option 2: install with setuptools
python3 setup.py install --user
```

#### Windows

For Windows, you will need CMake and Visual Studio for C++ compilation.

-   First, install `git`, `cmake`, `ffmpeg` and `python`. You can use [Chocolatey](https://chocolatey.org/) to manage packages, similar to Linux/Mac OS.
-   Second, install [`Visual Studio 2017 Community`](https://visualstudio.microsoft.com/); this may take some time.

When the dependencies are ready, open a command prompt:

```bash
cd your-workspace
git clone --recursive https://github.com/dmlc/decord
cd decord
mkdir build
cd build
cmake -DCMAKE_CXX_FLAGS="/DDECORD_EXPORTS" -DCMAKE_CONFIGURATION_TYPES="Release" -G "Visual Studio 15 2017 Win64" ..
# open `decord.sln` and build the project
```

## Usage

Decord provides a minimal API set for bootstrapping. You can also check out the Jupyter notebook [examples](examples/).

### VideoReader

VideoReader is used to access frames directly from video files.

```python
from decord import VideoReader
from decord import cpu, gpu

vr = VideoReader('examples/flipping_a_pancake.mkv', ctx=cpu(0))
# a file-like object works as well, for in-memory decoding
with open('examples/flipping_a_pancake.mkv', 'rb') as f:
  vr = VideoReader(f, ctx=cpu(0))
print('video frames:', len(vr))
# 1. the simplest way is to directly access frames
for i in range(len(vr)):
    # the video reader will handle seeking and skipping in the most efficient manner
    frame = vr[i]
    print(frame.shape)

# To get multiple frames at once, use get_batch
# this is the efficient way to obtain a long list of frames
frames = vr.get_batch([1, 3, 5, 7, 9])
print(frames.shape)
# (5, 240, 320, 3)
# duplicate frame indices will be accepted and handled internally to avoid duplicate decoding
frames2 = vr.get_batch([1, 2, 3, 2, 3, 4, 3, 4, 5]).asnumpy()
print(frames2.shape)
# (9, 240, 320, 3)

# 2. you can do cv2-style reading as well
# skip 100 frames
vr.skip_frames(100)
# seek to start
vr.seek(0)
batch = vr.next()
print('frame shape:', batch.shape)
print('numpy frames:', batch.asnumpy())
```

### VideoLoader

VideoLoader is designed for training deep learning models on large numbers of video files.
It provides smart video shuffling techniques for high random-access performance (seeking in video is slow and redundant).
The optimizations live in the underlying C++ code and are invisible to the user.

```python
from decord import VideoLoader
from decord import cpu, gpu

vl = VideoLoader(['1.mp4', '2.avi', '3.mpeg'], ctx=[cpu(0)], shape=(2, 320, 240, 3), interval=1, skip=5, shuffle=1)
print('Total batches:', len(vl))

for batch in vl:
    print(batch[0].shape)
```

Shuffling video can be tricky, thus we provide various modes:

```python
shuffle = -1  # smart shuffle mode, based on video properties (not implemented yet)
shuffle = 0  # all sequential, no seeking, following initial filename order
shuffle = 1  # random filename order, no random access within each video, very efficient
shuffle = 2  # random order
shuffle = 3  # random frame access within each video only
```

### AudioReader

AudioReader is used to access samples directly from both video (if there is an audio track) and audio files.

```python
from decord import AudioReader
from decord import cpu, gpu

# You can specify the desired sample rate and channel layout
# For channels there are two options: default to the original layout, or mono
ar = AudioReader('example.mp3', ctx=cpu(0), sample_rate=44100, mono=False)
print('Shape of audio samples: ', ar.shape())
# To access the audio samples
print('The first sample: ', ar[0])
print('The first five samples: ', ar[0:5])
print('Get a batch of samples: ', ar.get_batch([1,3,5]))
```

### AVReader

AVReader is a wrapper around both AudioReader and VideoReader. It enables you to slice the video and audio simultaneously.

```python
from decord import AVReader
from decord import cpu, gpu

av = AVReader('example.mov', ctx=cpu(0))
# To access both the video frames and the corresponding audio samples
audio, video = av[0:20]
# Each element in audio is the batch of samples corresponding to one frame of video
print('Frame #: ', len(audio))
print('Shape of the audio samples of the first frame: ', audio[0].shape)
print('Shape of the first frame: ', video.asnumpy()[0].shape)
# Similarly, to get a batch
audio2, video2 = av.get_batch([1,3,5])
```

## Bridges for deep learning frameworks

It's important to have a bridge from decord to popular deep learning frameworks for training/inference:

-   Apache MXNet (done)
-   PyTorch (done)
-   TensorFlow (done)

Using a bridge is simple. For example, one can set the default tensor output to `mxnet.ndarray`:

```python
import decord
vr = decord.VideoReader('examples/flipping_a_pancake.mkv')
print('native output:', type(vr[0]), vr[0].shape)
# native output: <class 'decord.ndarray.NDArray'>, (240, 426, 3)
# you only need to set the output type once
decord.bridge.set_bridge('mxnet')
print(type(vr[0]), vr[0].shape)
# <class 'mxnet.ndarray.ndarray.NDArray'> (240, 426, 3)
# or pytorch and tensorflow (>= 2.2.0)
decord.bridge.set_bridge('torch')
decord.bridge.set_bridge('tensorflow')
# or back to decord native format
decord.bridge.set_bridge('native')
```
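The `get_batch` duplicate-index behavior described in the VideoReader section can be mimicked in plain Python: decode each unique index once, then gather the results in request order. This is a hedged sketch of the idea only; `get_batch_dedup` and `fake_decode` are invented names for illustration, not decord's API:

```python
def get_batch_dedup(indices, decode_frame):
    """Decode each unique frame index once, then gather in request order."""
    cache = {}
    out = []
    for i in indices:
        if i not in cache:
            cache[i] = decode_frame(i)  # the expensive decode runs once per unique index
        out.append(cache[i])
    return out

calls = []  # records which indices were actually decoded

def fake_decode(i):
    calls.append(i)
    return 'frame-%d' % i

batch = get_batch_dedup([1, 2, 3, 2, 3, 4, 3, 4, 5], fake_decode)
print(len(batch), sorted(set(calls)))  # 9 frames returned, only 5 decodes: [1, 2, 3, 4, 5]
```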
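As a rough mental model of the `shuffle = 1` mode in the VideoLoader section (random file order, strictly sequential reads within each file), here is a pure-Python sketch. The file names and frame counts are made up for illustration; this is not decord's implementation:

```python
import random

def shuffle1_plan(frames_per_file, seed=0):
    """Visit files in random order, but read each file's frames sequentially (no seeking)."""
    rng = random.Random(seed)
    order = list(frames_per_file)
    rng.shuffle(order)  # random filename order
    return [(name, i) for name in order for i in range(frames_per_file[name])]

plan = shuffle1_plan({'1.mp4': 3, '2.avi': 2, '3.mpeg': 2})
# within every file, frame indices come out in increasing order
for name in {'1.mp4', '2.avi', '3.mpeg'}:
    idxs = [i for n, i in plan if n == name]
    assert idxs == list(range(len(idxs)))
print(len(plan))  # 7 frames scheduled in total
```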
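The AVReader section says each video frame is paired with the batch of audio samples that co-occur with it. The bookkeeping behind such a pairing can be sketched with simple arithmetic; the 30 fps / 44100 Hz numbers are assumptions for illustration, not values read from any file:

```python
def samples_for_frame(frame_idx, fps, sample_rate):
    """Return the [start, stop) audio-sample range overlapping one video frame."""
    start = round(frame_idx * sample_rate / fps)
    stop = round((frame_idx + 1) * sample_rate / fps)
    return start, stop

# 30 fps video with 44100 Hz audio: 44100 / 30 = 1470 samples per frame
print(samples_for_frame(0, 30, 44100))   # (0, 1470)
print(samples_for_frame(10, 30, 44100))  # (14700, 16170)
```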