{"id":18994056,"url":"https://github.com/fujiawei-dev/ffmpeg-generator","last_synced_at":"2025-04-15T11:17:29.094Z","repository":{"id":48484251,"uuid":"364142315","full_name":"fujiawei-dev/ffmpeg-generator","owner":"fujiawei-dev","description":"Python bindings for FFmpeg - with almost all filters support, even `gltransition` filter.","archived":false,"fork":false,"pushed_at":"2023-11-17T19:42:38.000Z","size":220,"stargazers_count":61,"open_issues_count":4,"forks_count":13,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-04-15T11:16:58.680Z","etag":null,"topics":["ffmpeg","ffmpeg-command","ffmpeg-python","ffmpeg-wrapper","ffplay","ffprobe","python-ffmpeg"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/fujiawei-dev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-05-04T04:48:45.000Z","updated_at":"2025-01-24T18:34:08.000Z","dependencies_parsed_at":"2024-11-08T17:30:46.390Z","dependency_job_id":"abaf58a9-34d2-4407-9015-a60259d7476a","html_url":"https://github.com/fujiawei-dev/ffmpeg-generator","commit_stats":{"total_commits":13,"total_committers":1,"mean_commits":13.0,"dds":0.0,"last_synced_commit":"8365f5face673222afa51ce55dde7b26deeb6071"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fujiawei-dev%2Fffmpeg-generator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fujiawei-dev%2Fffmpeg-generator/tags","releases_url":"
https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fujiawei-dev%2Fffmpeg-generator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fujiawei-dev%2Fffmpeg-generator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/fujiawei-dev","download_url":"https://codeload.github.com/fujiawei-dev/ffmpeg-generator/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":249058386,"owners_count":21205911,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ffmpeg","ffmpeg-command","ffmpeg-python","ffmpeg-wrapper","ffplay","ffprobe","python-ffmpeg"],"created_at":"2024-11-08T17:24:02.685Z","updated_at":"2025-04-15T11:17:29.060Z","avatar_url":"https://github.com/fujiawei-dev.png","language":"Python","readme":"\u003c!--\r\n * @Date: 2021.02.30 17:30:51\r\n * @LastEditors: Rustle Karl\r\n * @LastEditTime: 2021.05.25 08:27:32\r\n--\u003e\r\n\r\n# FFmpeg Generator\r\n\r\n\u003e An FFmpeg command generator and executor.\r\n\r\nPython bindings for FFmpeg, with support for almost all filters, including the `gltransition` filter.\r\n\r\n- [FFmpeg Generator](#ffmpeg-generator)\r\n  - [Overview](#overview)\r\n  - [TODO](#todo)\r\n  - [Installation](#installation)\r\n  - [Documents](#documents)\r\n  - [GLTransition Filter](#gltransition-filter)\r\n  - [Video Sources](#video-sources)\r\n    - [Play by FFplay](#play-by-ffplay)\r\n    - [Preview by FFmpeg](#preview-by-ffmpeg)\r\n    - [Save Video from Video Sources](#save-video-from-video-sources)\r\n  - [More Examples](#more-examples)\r\n    - 
[Get Stream Info](#get-stream-info)\r\n    - [Play a Video](#play-a-video)\r\n    - [Generate Thumbnail for Video](#generate-thumbnail-for-video)\r\n    - [Convert Video to Numpy Array](#convert-video-to-numpy-array)\r\n    - [Read Single Video Frame as JPEG](#read-single-video-frame-as-jpeg)\r\n    - [Convert Sound to Raw PCM Audio](#convert-sound-to-raw-pcm-audio)\r\n    - [Assemble Video from Sequence of Frames](#assemble-video-from-sequence-of-frames)\r\n    - [Audio/Video Pipeline](#audiovideo-pipeline)\r\n    - [Mono to Stereo with Offsets and Video](#mono-to-stereo-with-offsets-and-video)\r\n    - [Process Frames](#process-frames)\r\n    - [FaceTime Webcam Input](#facetime-webcam-input)\r\n    - [Stream from a Local Video to HTTP Server](#stream-from-a-local-video-to-http-server)\r\n    - [Stream from RTSP Server to TCP Socket](#stream-from-rtsp-server-to-tcp-socket)\r\n  - [Special Thanks](#special-thanks)\r\n\r\n## Overview\r\n\r\nThis project is based on [`ffmpeg-python`](https://github.com/kkroening/ffmpeg-python), but completely rewritten.\r\n\r\n- supports video sources\r\n- supports almost all filters\r\n- supports FFplay \u0026 FFprobe\r\n- enables CUDA hardware acceleration by default; disable it globally with the code below\r\n\r\n```python\r\nfrom ffmpeg import settings\r\n\r\nsettings.CUDA_ENABLE = False\r\n```\r\n\r\n## Installation\r\n\r\n```shell\r\npip install -U ffmpeg-generator\r\n```\r\n\r\n## Documents\r\n\r\nFFmpeg comes with more than 450 audio and video media filters. \r\nIt is recommended to read the official documentation.\r\n\r\n- [FFmpeg Homepage](https://ffmpeg.org/)\r\n- [FFmpeg Documentation](https://ffmpeg.org/ffmpeg.html)\r\n- [FFmpeg Filters Documentation](https://ffmpeg.org/ffmpeg-filters.html)\r\n\r\nOr read my study notes, which aim to demonstrate all the filters (written in Chinese). 
They are not complete yet.\r\n\r\n- [All Examples for Audio Filters](docs/afilters.md)\r\n- [All Examples for Video Filters](docs/vfilters.md)\r\n- [All Examples for Audio/Video Sources](docs/sources.md)\r\n- [All Examples for Media Filters](docs/avfilters.md)\r\n- [Introduction to FFplay Usage](docs/ffplay.md)\r\n- [More Notes](https://github.com/studying-notes/ffmpeg-notes)\r\n\r\n## GLTransition Filter\r\n\r\n```python\r\nfrom ffmpeg import avfilters, input, transitions, vfilters, vtools\r\nfrom ffmpeg.transitions import GLTransition, GLTransitionAll\r\nfrom tests import data\r\n\r\n# OpenGL Transition\r\n\r\n\"\"\"Combine two videos with transition effects.\"\"\"\r\n\r\nfor e in GLTransitionAll:\r\n    vtools.concat_2_videos_with_gltransition(data.TEST_OUTPUTS_DIR / (e + \".mp4\"),\r\n                                             data.SHORT0, data.SHORT1, offset=1,\r\n                                             duration=2, source=getattr(transitions, e))\r\n\r\n\"\"\"Combine multiple videos with transition effects.\"\"\"\r\n\r\nin0 = input(data.SHORT0).video\r\nin1 = input(data.SHORT1).video\r\nin2 = input(data.SHORT2).video\r\n\r\nin0_split = in0.split()\r\nin0_0, in0_1 = in0_split[0], in0_split[1]\r\nin0_0 = in0_0.trim(start=0, end=3)\r\nin0_1 = in0_1.trim(start=3, end=4).setpts()\r\n\r\nin1_split = in1.split()\r\nin1_0, in1_1 = in1_split[0], in1_split[1]\r\nin1_0 = in1_0.trim(start=0, end=3)\r\nin1_1 = in1_1.trim(start=3, end=4).setpts()\r\n\r\nin2_split = in2.split()\r\nin2_0, in2_1 = in2_split[0], in2_split[1]\r\nin2_0 = in2_0.trim(start=0, end=3)\r\nin2_1 = in2_1.trim(start=3, end=4).setpts()\r\n\r\ngl0_1 = vfilters.gltransition(in0_1, in1_0, source=GLTransition.Angular)\r\ngl1_2 = vfilters.gltransition(in1_1, in2_0, source=GLTransition.ButterflyWaveScrawler)\r\n\r\n# transition\r\n_ = avfilters.concat(in0_0, gl0_1, gl1_2, in2_1).output(\r\n        data.TEST_OUTPUTS_DIR / \"3_transition.mp4\",\r\n        vcodec=\"libx264\",\r\n        v_profile=\"baseline\",\r\n        
preset=\"slow\",\r\n        movflags=\"faststart\",\r\n        pixel_format=\"yuv420p\",\r\n).run()\r\n\r\n# transition + image watermark\r\nv_input = avfilters.concat(in0_0, gl0_1, gl1_2, in2_1)\r\ni_input = input(data.I1).scale(w=100, h=100)\r\nv_input.overlay(i_input, x=30, y=30).output(\r\n        data.TEST_OUTPUTS_DIR / \"3_transition_image.mp4\",\r\n        vcodec=\"libx264\",\r\n        v_profile=\"baseline\",\r\n        preset=\"slow\",\r\n        movflags=\"faststart\",\r\n        pixel_format=\"yuv420p\",\r\n).run()\r\n\r\n# transition + image watermark + text watermark\r\nv_input = avfilters.concat(in0_0, gl0_1, gl1_2, in2_1). \\\r\n    drawtext(text=\"Watermark\", x=150, y=150, fontsize=36, fontfile=data.FONT1)\r\ni_input = input(data.I1).scale(w=100, h=100)\r\nv_input.overlay(i_input, x=30, y=30).output(\r\n        data.TEST_OUTPUTS_DIR / \"3_transition_image_text.mp4\",\r\n        vcodec=\"libx264\",\r\n        v_profile=\"baseline\",\r\n        preset=\"slow\",\r\n        movflags=\"faststart\",\r\n        pixel_format=\"yuv420p\",\r\n).run()\r\n\r\n# transition + image watermark + text watermark + music\r\nv_input = avfilters.concat(in0_0, gl0_1, gl1_2, in2_1). 
\\\r\n    drawtext(text=\"Watermark\", x=150, y=150, fontsize=36, fontfile=data.FONT1)\r\ni_input = input(data.I1).scale(w=100, h=100)\r\na_input = input(data.A1).audio\r\nv_input.overlay(i_input, x=30, y=30).output(\r\n        a_input,\r\n        data.TEST_OUTPUTS_DIR / \"3_transition_image_text_music.mp4\",\r\n        acodec=\"copy\",\r\n        vcodec=\"libx264\",\r\n        v_profile=\"baseline\",\r\n        shortest=True,\r\n        preset=\"slow\",\r\n        movflags=\"faststart\",\r\n        pixel_format=\"yuv420p\",\r\n).run()\r\n```\r\n\r\n## Video Sources\r\n\r\n### Play by FFplay\r\n\r\n```python\r\nfrom ffmpeg import run_ffplay\r\n\r\n_ = run_ffplay(\"allrgb\", f=\"lavfi\")\r\n_ = run_ffplay(\"allyuv\", f=\"lavfi\")\r\n_ = run_ffplay(\"color=c=red@0.2:s=1600x900:r=10\", f=\"lavfi\")\r\n_ = run_ffplay(\"haldclutsrc\", f=\"lavfi\")\r\n_ = run_ffplay(\"pal75bars\", f=\"lavfi\")\r\n_ = run_ffplay(\"rgbtestsrc=size=900x600:rate=60\", f=\"lavfi\")\r\n_ = run_ffplay(\"smptebars=size=900x600:rate=60\", f=\"lavfi\")\r\n_ = run_ffplay(\"smptehdbars=size=900x600:rate=60\", f=\"lavfi\")\r\n_ = run_ffplay(\"testsrc=size=900x600:rate=60\", f=\"lavfi\")\r\n_ = run_ffplay(\"testsrc2=s=900x600:rate=60\", f=\"lavfi\")\r\n_ = run_ffplay(\"yuvtestsrc=s=900x600:rate=60\", f=\"lavfi\")\r\n```\r\n\r\n### Preview by FFmpeg\r\n\r\n```python\r\nfrom ffmpeg import input_source\r\n\r\n_ = input_source(\"testsrc\", size=\"900x600\", rate=60).output(preview=True).run_async()\r\n_ = input_source(\"testsrc2\", size=\"900x600\", rate=60).output(preview=True).run_async()\r\n```\r\n\r\n### Save Video from Video Sources\r\n\r\n```python\r\nfrom ffmpeg import input_source\r\n\r\n_ = input_source(\"testsrc\", size=\"900x600\", rate=60, duration=30).output(\"source_testsrc.mp4\").run()\r\n```\r\n\r\n## More Examples\r\n\r\n### Get Stream Info\r\n\r\n```python\r\nfrom ffmpeg import FFprobe\r\n\r\nmeta = 
FFprobe(\"path/to/file\")\r\n\r\n# all streams\r\nprint(meta.metadata)\r\n\r\n# video stream\r\nprint(meta.video)\r\nprint(meta.video_duration)\r\nprint(meta.video_scale)\r\n\r\n# audio stream\r\nprint(meta.audio)\r\nprint(meta.audio_duration)\r\n```\r\n\r\n### Play a Video\r\n\r\n```python\r\nfrom ffmpeg import ffplay_video\r\nfrom tests import data\r\n\r\nffplay_video(data.V1, vf='transpose=1')\r\nffplay_video(data.V1, vf='hflip')\r\nffplay_video(data.V1, af='atempo=2')\r\nffplay_video(data.V1, vf='setpts=PTS/2')\r\nffplay_video(data.V1, vf='transpose=1,setpts=PTS/2', af='atempo=2')\r\n```\r\n\r\n### Generate Thumbnail for Video\r\n\r\n```python\r\nfrom ffmpeg import vtools\r\n\r\nvtools.generate_video_thumbnail(src=\"src\", dst=\"dst\", start_position=3, width=400, height=-1)\r\n```\r\n\r\n### Convert Video to Numpy Array\r\n\r\n```python\r\nfrom ffmpeg import vtools\r\n\r\nvtools.convert_video_to_np_array(src=\"src\")\r\n```\r\n\r\n### Read Single Video Frame as JPEG\r\n\r\n```python\r\nfrom ffmpeg import vtools\r\n\r\nvtools.read_frame_as_jpeg(src=\"src\", frame=10)\r\n```\r\n\r\n### Convert Sound to Raw PCM Audio\r\n\r\n```python\r\nfrom ffmpeg import atools\r\n\r\naudio = '/path/to/audio.m4a'\r\ndst = '/path/to/dst.pcm'\r\n\r\natools.convert_audio_to_raw_pcm(src=audio, dst=None)\r\natools.convert_audio_to_raw_pcm(src=audio, dst=dst)\r\n```\r\n\r\n### Assemble Video from Sequence of Frames\r\n\r\n```python\r\nfrom ffmpeg import vtools\r\n\r\n# on Linux\r\nvtools.assemble_video_from_images('/path/to/jpegs/*.jpg', pattern_type='glob', frame_rate=25)\r\n\r\n# on Windows\r\nvtools.assemble_video_from_images('/path/to/jpegs/%02d.jpg', pattern_type=None, frame_rate=25)\r\n```\r\n\r\n\u003e https://stackoverflow.com/questions/31201164/ffmpeg-error-pattern-type-glob-was-selected-but-globbing-is-not-support-ed-by\r\n\r\nWith additional filtering:\r\n\r\n```python\r\nimport ffmpeg\r\n\r\nffmpeg.input('/path/to/jpegs/*.jpg', pattern_type='glob', framerate=25). 
\\\r\n    filter('deflicker', mode='pm', size=10). \\\r\n    filter('scale', size='hd1080', force_original_aspect_ratio='increase'). \\\r\n    output('movie.mp4', crf=20, preset='slower', movflags='faststart', pix_fmt='yuv420p'). \\\r\n    view(save_path='filter_graph').run()\r\n```\r\n\r\n### Audio/Video Pipeline\r\n\r\n```python\r\nimport ffmpeg\r\nfrom ffmpeg import avfilters\r\n\r\nin1 = ffmpeg.input(\"input.mp4\")\r\nin2 = ffmpeg.input(\"input.mp4\")\r\n\r\nv1 = in1.video.hflip()\r\na1 = in2.audio\r\n\r\nv2 = in2.video.reverse().hue(s=0)\r\na2 = in2.audio.areverse().aphaser()\r\n\r\njoined = avfilters.concat(v1, a1, v2, a2, v=1, a=1).Node\r\n\r\nv3 = joined[0]\r\na3 = joined[1].volume(0.8)\r\n\r\nv3.output(a3, 'v1_v2_pipeline.mp4').run()\r\n```\r\n\r\n### Mono to Stereo with Offsets and Video\r\n\r\n```python\r\nimport ffmpeg\r\nfrom ffmpeg import afilters\r\nfrom tests import data\r\n\r\ninput_video = ffmpeg.input(data.V1)\r\naudio_left = ffmpeg.input(data.A1).atrim(start=15).asetpts(\"PTS-STARTPTS\")\r\naudio_right = ffmpeg.input(data.A1).atrim(start=10).asetpts(\"PTS-STARTPTS\")\r\n\r\nafilters.join(audio_left, audio_right, inputs=2, channel_layout=\"stereo\"). 
\\\r\n    output(input_video.video, \"stereo_video.mp4\", shortest=None, vcodec=\"copy\").run()\r\n```\r\n\r\n### Process Frames\r\n\r\n- Decode input video with ffmpeg\r\n- Process each video frame with python\r\n- Encode output video with ffmpeg\r\n\r\n```python\r\nimport subprocess\r\n\r\nimport numpy as np\r\n\r\nfrom ffmpeg import constants, FFprobe, input, settings\r\nfrom tests import data\r\n\r\nsettings.CUDA_ENABLE = False\r\n\r\n\r\ndef ffmpeg_input_process(src):\r\n    return input(src).output(constants.PIPE, format=\"rawvideo\",\r\n                             pixel_format=\"rgb24\").run_async(pipe_stdout=True)\r\n\r\n\r\ndef ffmpeg_output_process(dst, width, height):\r\n    return input(constants.PIPE, format=\"rawvideo\", pixel_format=\"rgb24\",\r\n                 width=width, height=height).output(dst, pixel_format=\"yuv420p\"). \\\r\n        run_async(pipe_stdin=True)\r\n\r\n\r\ndef read_frame_from_stdout(process: subprocess.Popen, width, height):\r\n    frame_size = width * height * 3\r\n    input_bytes = process.stdout.read(frame_size)\r\n\r\n    if not input_bytes:\r\n        return\r\n\r\n    assert len(input_bytes) == frame_size\r\n\r\n    return np.frombuffer(input_bytes, np.uint8).reshape([height, width, 3])\r\n\r\n\r\ndef process_frame_simple(frame):\r\n    # deep dream\r\n    return frame * 0.3\r\n\r\n\r\ndef write_frame_to_stdin(process: subprocess.Popen, frame):\r\n    process.stdin.write(frame.astype(np.uint8).tobytes())\r\n\r\n\r\ndef run(src, dst, process_frame):\r\n    width, height = FFprobe(src).video_scale\r\n\r\n    input_process = ffmpeg_input_process(src)\r\n    output_process = ffmpeg_output_process(dst, width, height)\r\n\r\n    while True:\r\n        input_frame = read_frame_from_stdout(input_process, width, height)\r\n\r\n        if input_frame is None:\r\n            break\r\n\r\n        write_frame_to_stdin(output_process, process_frame(input_frame))\r\n\r\n    input_process.wait()\r\n\r\n    
output_process.stdin.close()\r\n    output_process.wait()\r\n\r\n\r\nif __name__ == '__main__':\r\n    run(data.SHORT0, data.TEST_OUTPUTS_DIR / \"process_frame.mp4\", process_frame_simple)\r\n```\r\n\r\n### FaceTime Webcam Input\r\n\r\n```python\r\nimport ffmpeg\r\n\r\ndef facetime():\r\n    ffmpeg.input(\"FaceTime\", format=\"avfoundation\",\r\n                 pixel_format=\"uyvy422\", framerate=30). \\\r\n        output(\"facetime.mp4\", pixel_format=\"yuv420p\", frame_size=100).run()\r\n```\r\n\r\n### Stream from a Local Video to HTTP Server\r\n\r\n```python\r\nfrom ffmpeg import input\r\n\r\ninput(\"video.mp4\").output(\"http://127.0.0.1:8080\",\r\n                          codec=\"copy\",  # use the same codecs as the original video\r\n                          listen=1,  # enable the HTTP server\r\n                          f=\"flv\").\\\r\n    with_global_args(\"-re\").\\\r\n    run()  # `-re` reads the input at its native frame rate, so it acts as a live stream\r\n```\r\n\r\nTo receive the video, you can use FFplay in a terminal:\r\n\r\n```shell\r\nffplay -f flv http://localhost:8080\r\n```\r\n\r\n### Stream from RTSP Server to TCP Socket\r\n\r\n```python\r\nimport socket\r\nfrom ffmpeg import input\r\n\r\nserver = socket.socket()\r\nserver.connect((\"127.0.0.1\", 9999))  # connect to a receiving TCP endpoint first (address is illustrative)\r\nprocess = input('rtsp://%s:8554/default').\\\r\n    output('-', format='h264').\\\r\n    run_async(pipe_stdout=True)\r\n\r\nwhile process.poll() is None:\r\n    packet = process.stdout.read(4096)\r\n    try:\r\n        server.send(packet)\r\n    except socket.error:\r\n        process.stdout.close()\r\n        process.wait()\r\n        break\r\n```\r\n\r\n## Special Thanks\r\n\r\n- [The FFmpeg-Python 
Project](https://github.com/kkroening/ffmpeg-python)\r\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffujiawei-dev%2Fffmpeg-generator","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffujiawei-dev%2Fffmpeg-generator","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffujiawei-dev%2Fffmpeg-generator/lists"}