Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/materight/pyav-cuda
Extension of PyAV (ffmpeg bindings) with hardware decoding support. Compatible with PyTorch and Nvidia codecs.
- Host: GitHub
- URL: https://github.com/materight/pyav-cuda
- Owner: materight
- License: mit
- Created: 2024-07-15T13:44:50.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-09-10T17:06:31.000Z (5 months ago)
- Last Synced: 2024-09-30T18:42:49.452Z (4 months ago)
- Topics: cuda, cuvid, ffmpeg, libav, pytorch
- Language: Python
- Homepage:
- Size: 55.7 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
# PyAV-CUDA
[![PyPI version](https://img.shields.io/pypi/v/avcuda)](https://pypi.org/project/avcuda/)

**PyAV-CUDA** is an extension of [PyAV](https://github.com/PyAV-Org/PyAV) that adds support for hardware-accelerated video decoding using Nvidia GPUs. It integrates with FFmpeg and PyTorch, providing CUDA-accelerated kernels for efficient color space conversion.
## Installation
1. Build and install FFmpeg with [hardware acceleration support](https://pytorch.org/audio/stable/build.ffmpeg.html).
2. To enable hardware acceleration, PyAV needs to be reinstalled from source so that it links against the custom FFmpeg build. Assuming FFmpeg is installed in `/opt/ffmpeg`, run:
```bash
pip uninstall av
PKG_CONFIG_LIBDIR="/opt/ffmpeg/lib/pkgconfig" pip install av --no-binary av --no-cache
```
If the installation was successful, `h264_cuvid` should appear among the available codecs:
```python
import av
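# "h264_cuvid" should appear in the printed list when hardware decoding is enabled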
print(av.codecs_available)
```

3. Install PyAV-CUDA:
```bash
PKG_CONFIG_LIBDIR="/opt/ffmpeg/lib/pkgconfig" CUDA_HOME="/usr/local/cuda" pip install avcuda
```

4. Test the installation by running `python examples/benchmark.py`. The output should show something like:
```
Running CPU decoding... took 34.99s
Running GPU decoding... took 8.30s
```

## Usage
To use hardware decoding, instantiate an `HWDeviceContext` and attach it to a `VideoStream`. Note that an `HWDeviceContext` can be shared by multiple `VideoStream` instances to save memory.
```python
import av
import avcuda

CUDA_DEVICE = 0

with (
    av.open("video.mp4") as container,
    avcuda.HWDeviceContext(CUDA_DEVICE) as hwdevice_ctx,
):
    stream = container.streams.video[0]
    hwdevice_ctx.attach(stream.codec_context)

    # Convert frames into RGB PyTorch tensors on the same device
    for frame in container.decode(stream):
        frame_tensor = hwdevice_ctx.to_tensor(frame)
```
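
As a rough sketch of the sharing behaviour mentioned above, a single `HWDeviceContext` might be reused across several containers. The file names `clip_a.mp4` and `clip_b.mp4` are placeholders, and the sketch assumes only the `attach` and `to_tensor` calls shown in the example:

```python
import av
import avcuda

CUDA_DEVICE = 0

with (
    av.open("clip_a.mp4") as container_a,  # placeholder input files
    av.open("clip_b.mp4") as container_b,
    avcuda.HWDeviceContext(CUDA_DEVICE) as hwdevice_ctx,
):
    for container in (container_a, container_b):
        stream = container.streams.video[0]
        hwdevice_ctx.attach(stream.codec_context)  # one shared context for both streams
        for frame in container.decode(stream):
            # RGB tensor on the same CUDA device as the decoder
            frame_tensor = hwdevice_ctx.to_tensor(frame)
            print(frame_tensor.shape, frame_tensor.device)
            break  # inspect only the first frame of each clip
```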