{"id":14977681,"url":"https://github.com/adamspannbauer/python_video_stab","last_synced_at":"2025-04-08T11:08:05.760Z","repository":{"id":47135362,"uuid":"106559498","full_name":"AdamSpannbauer/python_video_stab","owner":"AdamSpannbauer","description":"A Python package to stabilize videos using OpenCV","archived":false,"fork":false,"pushed_at":"2023-07-19T22:27:44.000Z","size":446077,"stargazers_count":720,"open_issues_count":20,"forks_count":122,"subscribers_count":20,"default_branch":"master","last_synced_at":"2025-04-08T11:07:59.338Z","etag":null,"topics":["computer-vision","opencv","python","video","video-stabilization"],"latest_commit_sha":null,"homepage":"https://adamspannbauer.github.io/python_video_stab/html/index.html","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AdamSpannbauer.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-10-11T13:42:07.000Z","updated_at":"2025-04-08T10:13:33.000Z","dependencies_parsed_at":"2024-06-18T23:11:10.322Z","dependency_job_id":"dbe5fd3b-0584-41be-82d0-e76005921550","html_url":"https://github.com/AdamSpannbauer/python_video_stab","commit_stats":{"total_commits":298,"total_committers":5,"mean_commits":59.6,"dds":0.07718120805369133,"last_synced_commit":"e005882727939749a91e6e327a3a0fae7d09d5d4"},"previous_names":[],"tags_count":18,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AdamSpannbauer%2Fpython_video_stab","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AdamSpannbauer%
2Fpython_video_stab/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AdamSpannbauer%2Fpython_video_stab/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AdamSpannbauer%2Fpython_video_stab/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AdamSpannbauer","download_url":"https://codeload.github.com/AdamSpannbauer/python_video_stab/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247829491,"owners_count":21002995,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","opencv","python","video","video-stabilization"],"created_at":"2024-09-24T13:56:07.984Z","updated_at":"2025-04-08T11:08:05.732Z","avatar_url":"https://github.com/AdamSpannbauer.png","language":"Python","readme":"# Python Video Stabilization \u003cimg src='https://s3.amazonaws.com/python-vidstab/logo/vidstab_logo_hex.png' width=125 align='right'/\u003e\n\n\u003c!-- noop --\u003e\n\n[![Build Status](https://travis-ci.org/AdamSpannbauer/python_video_stab.svg?branch=master)](https://travis-ci.org/AdamSpannbauer/python_video_stab)\n[![codecov](https://codecov.io/gh/AdamSpannbauer/python_video_stab/branch/master/graph/badge.svg)](https://codecov.io/gh/AdamSpannbauer/python_video_stab)\n[![Maintainability](https://api.codeclimate.com/v1/badges/f3a17d211a2a437d21b1/maintainability)](https://codeclimate.com/github/AdamSpannbauer/python_video_stab/maintainability)\n[![PyPi 
version](https://img.shields.io/pypi/v/vidstab.svg)](https://pypi.org/project/vidstab/)\n[![Last Commit](https://img.shields.io/github/last-commit/AdamSpannbauer/python_video_stab.svg)](https://github.com/AdamSpannbauer/python_video_stab/commits/master)\n[![Downloads](https://pepy.tech/badge/vidstab)](https://pepy.tech/project/vidstab)\n\nPython video stabilization using OpenCV. Full [searchable documentation here](https://adamspannbauer.github.io/python_video_stab).\n\nThis module contains a single class (`VidStab`) used for video stabilization. The class is based on the work presented by Nghia Ho in [SIMPLE VIDEO STABILIZATION USING OPENCV](http://nghiaho.com/?p=2093). The foundation code was found in a comment on Nghia Ho's post by a commenter with the username koala.\n\nInput                           |  Output\n:-------------------------------:|:-------------------------:\n![](https://s3.amazonaws.com/python-vidstab/readme/input_ostrich.gif)    |  ![](https://s3.amazonaws.com/python-vidstab/readme/stable_ostrich.gif)\n\n*[Video](https://www.youtube.com/watch?v=9pypPqbV_GM) used with permission from [HappyLiving](https://www.facebook.com/happylivinginfl/)*\n\n## Contents:\n1. [Installation](#installation)\n   * [Install `vidstab` without installing OpenCV](#install-vidstab-without-installing-opencv)\n   * [Install `vidstab` \u0026 OpenCV](#install-vidstab--opencv)\n2. [Basic Usage](#basic-usage)\n   * [Using from command line](#using-from-command-line)\n   * [Using VidStab class](#using-vidstab-class)\n3. [Advanced Usage](#advanced-usage)\n   * [Plotting frame to frame transformations](#plotting-frame-to-frame-transformations)\n   * [Using borders](#using-borders)\n   * [Using Frame Layering](#using-frame-layering)\n   * [Automatic border sizing](#automatic-border-sizing)\n   * [Stabilizing a frame at a time](#stabilizing-a-frame-at-a-time)\n   * [Working with live video](#working-with-live-video)\n   * [Transform File Writing \u0026 Reading](#transform-file-writing--reading)\n\n## Installation\n\n\u003e ```diff\n\u003e + Please report issues if you install/try to install and run into problems!\n\u003e ```\n\n### Install `vidstab` without installing OpenCV\n\nIf you've already built OpenCV with Python bindings on your machine, it is recommended to install `vidstab` without the PyPI versions of OpenCV. The `opencv-python` module can cause issues if you've already built OpenCV from source in your environment.\n\nThe commands below install `vidstab` without OpenCV included.\n\n#### From PyPI\n\n```bash\npip install vidstab\n```\n\n#### From GitHub\n\n```bash\npip install git+https://github.com/AdamSpannbauer/python_video_stab.git\n```\n\n### Install `vidstab` \u0026 OpenCV\n\nIf you don't have OpenCV installed already, there are a couple of options.\n\n1. You can build OpenCV using one of the great online tutorials from [PyImageSearch](https://www.pyimagesearch.com/), [LearnOpenCV](https://www.learnopencv.com/), or [OpenCV](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_setup/py_table_of_contents_setup/py_table_of_contents_setup.html#py-table-of-content-setup) themselves. When building from source you have more options (e.g. [platform optimization](https://www.pyimagesearch.com/2017/10/09/optimizing-opencv-on-the-raspberry-pi/)), but more responsibility. Once installed you can use the pip install commands shown above.\n2. 
You can install a pre-built distribution of OpenCV from PyPI as a dependency for `vidstab` (see the command below).\n\nThe command below installs `vidstab` with `opencv-contrib-python` as a dependency.\n\n#### From PyPI\n\n```bash\npip install vidstab[cv2]\n```\n\n#### From GitHub\n\n```bash\npip install -e git+https://github.com/AdamSpannbauer/python_video_stab.git#egg=vidstab[cv2]\n```\n\n## Basic usage\n\nThe `VidStab` class can be used as a command-line script or in your own custom Python code.\n\n### Using from command line\n\n```bash\n# Using defaults\npython3 -m vidstab --input input_video.mov --output stable_video.avi\n```\n\n```bash\n# Using a specific keypoint detector\npython3 -m vidstab -i input_video.mov -o stable_video.avi -k GFTT\n```\n\n### Using `VidStab` class\n\n```python\nfrom vidstab import VidStab\n\n# Using defaults\nstabilizer = VidStab()\nstabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')\n\n# Using a specific keypoint detector\nstabilizer = VidStab(kp_method='ORB')\nstabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')\n\n# Using a specific keypoint detector and customizing keypoint parameters\nstabilizer = VidStab(kp_method='FAST', threshold=42, nonmaxSuppression=False)\nstabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')\n```\n\n## Advanced usage\n\n### Plotting frame to frame transformations\n\n```python\nfrom vidstab import VidStab\nimport matplotlib.pyplot as plt\n\nstabilizer = VidStab()\nstabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')\n\nstabilizer.plot_trajectory()\nplt.show()\n\nstabilizer.plot_transforms()\nplt.show()\n```\n\nTrajectories                     |  Transforms\n:-------------------------------:|:-------------------------:\n![](https://s3.amazonaws.com/python-vidstab/readme/trajectory_plot.png)  |  ![](https://s3.amazonaws.com/python-vidstab/readme/transforms_plot.png)\n\n### Using borders\n\n```python\nfrom vidstab import VidStab\n\nstabilizer = VidStab()\n\n# black borders\nstabilizer.stabilize(input_path='input_video.mov',\n                     output_path='stable_video.avi',\n                     border_type='black')\nstabilizer.stabilize(input_path='input_video.mov',\n                     output_path='wide_stable_video.avi',\n                     border_type='black',\n                     border_size=100)\n\n# filled in borders\nstabilizer.stabilize(input_path='input_video.mov',\n                     output_path='ref_stable_video.avi',\n                     border_type='reflect')\nstabilizer.stabilize(input_path='input_video.mov',\n                     output_path='rep_stable_video.avi',\n                     border_type='replicate')\n```\n\n\u003ctable\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cp align='center'\u003e\u003ccode\u003eborder_size=0\u003c/code\u003e\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp align='center'\u003e\u003ccode\u003eborder_size=100\u003c/code\u003e\u003c/p\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cp align='center'\u003e\u003cimg src='https://s3.amazonaws.com/python-vidstab/readme/stable_ostrich.gif'\u003e\u003c/p\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cp align='center'\u003e\u003cimg src='https://s3.amazonaws.com/python-vidstab/readme/wide_stable_ostrich.gif'\u003e\u003c/p\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\n`border_type='reflect'`                 |  `border_type='replicate'`\n:--------------------------------------:|:-------------------------:\n![](https://s3.amazonaws.com/python-vidstab/readme/reflect_stable_ostrich.gif)  |  ![](https://s3.amazonaws.com/python-vidstab/readme/replicate_stable_ostrich.gif)\n\n*[Video](https://www.youtube.com/watch?v=9pypPqbV_GM) used with permission from [HappyLiving](https://www.facebook.com/happylivinginfl/)*\n\n### Using Frame Layering\n\n```python\nfrom vidstab import VidStab, layer_overlay, layer_blend\n\n# Initialize video stabilizer\nstabilizer = VidStab()\n\n# use vidstab.layer_overlay for generating a trail effect\nstabilizer.stabilize(input_path=INPUT_VIDEO_PATH,\n                     output_path='trail_stable_video.avi',\n                     border_type='black',\n                     border_size=100,\n                     layer_func=layer_overlay)\n\n# create a custom overlay function\n# here we use vidstab.layer_blend with a custom alpha;\n#   layer_blend will generate a fading trail effect with some motion blur\ndef layer_custom(foreground, background):\n    return layer_blend(foreground, background, foreground_alpha=.8)\n\n# use custom overlay function\nstabilizer.stabilize(input_path=INPUT_VIDEO_PATH,\n                     output_path='blend_stable_video.avi',\n                     border_type='black',\n                     border_size=100,\n                     layer_func=layer_custom)\n```\n\n`layer_func=vidstab.layer_overlay`     |  `layer_func=vidstab.layer_blend`\n:--------------------------------------:|:-------------------------:\n![](https://s3.amazonaws.com/python-vidstab/readme/trail_stable_ostrich.gif)  |  ![](https://s3.amazonaws.com/python-vidstab/readme/blend_stable_ostrich.gif)\n\n*[Video](https://www.youtube.com/watch?v=9pypPqbV_GM) used with permission from [HappyLiving](https://www.facebook.com/happylivinginfl/)*\n\n### Automatic border sizing\n\n```python\nfrom vidstab import VidStab, layer_overlay\n\nstabilizer = VidStab()\n\nstabilizer.stabilize(input_path=INPUT_VIDEO_PATH,\n                     output_path='auto_border_stable_video.avi',\n                     border_size='auto',\n                     # frame layering to show performance of auto sizing\n                     layer_func=layer_overlay)\n```\n\n\u003cp align='center'\u003e\n  \u003cimg width='45%' src='https://s3.amazonaws.com/python-vidstab/readme/auto_border_stable_ostrich.gif'\u003e\n\u003c/p\u003e\n\n### Stabilizing a frame at a time\n\nThe method `VidStab.stabilize_frame()` accepts `numpy` arrays, allowing stabilization to be applied one frame at a time.\nThis makes it possible to pre/post-process each frame around stabilization; see the examples below.\n\n#### Simplest form\n\n```python\nimport cv2\nfrom vidstab.VidStab import VidStab\n\nstabilizer = VidStab()\nvidcap = cv2.VideoCapture('input_video.mov')\n\nwhile True:\n    grabbed_frame, frame = vidcap.read()\n\n    if frame is not None:\n        # Perform any pre-processing of frame before stabilization here\n        pass\n\n    # Pass frame to stabilizer even if frame is None\n    # stabilized_frame will be an all black frame until iteration 30\n    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,\n                                                  smoothing_window=30)\n    if stabilized_frame is None:\n        # There are no more frames available to stabilize\n        break\n\n    # Perform any post-processing of stabilized frame here\n    pass\n```\n\n#### Example with object tracking\n\n```python\nimport os\nimport cv2\nfrom vidstab import VidStab, layer_overlay, download_ostrich_video\n\n# Download test video to stabilize\nif not os.path.isfile(\"ostrich.mp4\"):\n    download_ostrich_video(\"ostrich.mp4\")\n\n# Initialize object tracker, stabilizer, and video reader\nobject_tracker = cv2.TrackerCSRT_create()\nstabilizer = VidStab()\nvidcap = cv2.VideoCapture(\"ostrich.mp4\")\n\n# Initialize bounding box for drawing rectangle around tracked object\nobject_bounding_box = None\n\nwhile True:\n    grabbed_frame, frame = vidcap.read()\n\n    # Pass frame to stabilizer even if frame is None\n    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame, border_size=50)\n\n    # If stabilized_frame is None then there are no frames left to process\n    if stabilized_frame is None:\n        break\n\n    # Draw rectangle around tracked object if tracking has started\n    if object_bounding_box is not None:\n        success, object_bounding_box = object_tracker.update(stabilized_frame)\n\n        if success:\n            (x, y, w, h) = [int(v) for v in object_bounding_box]\n            cv2.rectangle(stabilized_frame, (x, y), (x + w, y + h),\n                          (0, 255, 0), 2)\n\n    # Display stabilized output\n    cv2.imshow('Frame', stabilized_frame)\n\n    key = cv2.waitKey(5)\n\n    # Select ROI for tracking and begin object tracking\n    # Non-zero frame indicates stabilization process is warmed up\n    if stabilized_frame.sum() \u003e 0 and object_bounding_box is None:\n        object_bounding_box = cv2.selectROI(\"Frame\",\n                                            stabilized_frame,\n                                            fromCenter=False,\n                                            showCrosshair=True)\n        object_tracker.init(stabilized_frame, object_bounding_box)\n    elif key == 27:\n        break\n\nvidcap.release()\ncv2.destroyAllWindows()\n```\n\n\u003cp align='center'\u003e\n  \u003cimg width='50%' src='https://s3.amazonaws.com/python-vidstab/readme/obj_tracking_vidstab_1.gif'\u003e\n\u003c/p\u003e\n\n### Working with live video\n\nThe `VidStab` class can also process live video streams. The underlying video reader is `cv2.VideoCapture` ([documentation](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html)).\nThe relevant snippet from the documentation for stabilizing live video is:\n\n\u003e *Its argument can be either the device index or the name of a video file. Device index is just the number to specify which camera. Normally one camera will be connected (as in my case). So I simply pass 0 (or -1). You can select the second camera by passing 1 and so on.*\n\nThe `input_path` argument of the `VidStab.stabilize` method can accept integers, which are passed directly to `cv2.VideoCapture` as a device index. 
You can also pass a device index to the `--input` argument for command-line usage.\n\nOne notable difference between live feeds and video files is that webcam footage does not have a definite end point.\nThe options for ending live video stabilization are to set a maximum length with the `max_frames` argument or to stop the process manually by pressing the \u003ckbd\u003eEsc\u003c/kbd\u003e or \u003ckbd\u003eQ\u003c/kbd\u003e key.\nIf `max_frames` is not provided, no progress bar can be displayed for live video stabilization.\n\n#### Example\n\n```python\nfrom vidstab import VidStab\n\nstabilizer = VidStab()\nstabilizer.stabilize(input_path=0,\n                     output_path='stable_webcam.avi',\n                     max_frames=1000,\n                     playback=True)\n```\n\n\u003cp align='center'\u003e\n  \u003cimg width='50%' src='https://s3.amazonaws.com/python-vidstab/readme/webcam_stable.gif'\u003e\n\u003c/p\u003e\n\n### Transform file writing \u0026 reading\n\n#### Generating and saving transforms to file\n\n```python\nimport numpy as np\nfrom vidstab import VidStab, download_ostrich_video\n\n# Download video if needed\ndownload_ostrich_video(INPUT_VIDEO_PATH)\n\n# Generate transforms and save to TRANSFORMATIONS_PATH as csv (no headers)\nstabilizer = VidStab()\nstabilizer.gen_transforms(INPUT_VIDEO_PATH)\nnp.savetxt(TRANSFORMATIONS_PATH, stabilizer.transforms, delimiter=',')\n```\n\nThe file at `TRANSFORMATIONS_PATH` has the form shown below. The 3 columns represent delta x, delta y, and delta angle, respectively.\n\n```\n-9.249733913760086068e+01,2.953221378387767970e+01,-2.875918912994855636e-02\n-8.801434576214279559e+01,2.741942225927152776e+01,-2.715232319470826938e-02\n```\n\n#### Reading and using transforms from file\n\nThe example below reads a file of transforms and applies them to an arbitrary video. 
The transform file is of the form shown in [above section](#generating-and-saving-transforms-to-file).\n\n```python\nimport numpy as np\nfrom vidstab import VidStab\n\n# Read in csv transform data, of form (delta x, delta y, delta angle):\ntransforms = np.loadtxt(TRANSFORMATIONS_PATH, delimiter=',')\n\n# Create stabilizer and supply numpy array of transforms\nstabilizer = VidStab()\nstabilizer.transforms = transforms\n\n# Apply stabilizing transforms to INPUT_VIDEO_PATH and save to OUTPUT_VIDEO_PATH\nstabilizer.apply_transforms(INPUT_VIDEO_PATH, OUTPUT_VIDEO_PATH)\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fadamspannbauer%2Fpython_video_stab","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fadamspannbauer%2Fpython_video_stab","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fadamspannbauer%2Fpython_video_stab/lists"}
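The transform CSV round trip described in the README can be sketched with `numpy` alone. This is a minimal illustration, not stabilizer output: the two rows are the sample values shown in the README, and `transforms.csv` is a hypothetical stand-in for `TRANSFORMATIONS_PATH`.

```python
import os
import tempfile

import numpy as np

# Two sample rows in the README's transform format:
# columns are (delta x, delta y, delta angle), one row per frame pair.
transforms = np.array([
    [-92.49733913760086, 29.53221378387768, -0.028759189129948556],
    [-88.01434576214280, 27.41942225927153, -0.027152323194708269],
])

# Hypothetical path standing in for TRANSFORMATIONS_PATH
path = os.path.join(tempfile.mkdtemp(), "transforms.csv")

# Save as a headerless, comma-delimited CSV, matching the README's format
np.savetxt(path, transforms, delimiter=",")

# Read back; this array could then be assigned to stabilizer.transforms
loaded = np.loadtxt(path, delimiter=",")
print(loaded.shape)  # (2, 3)
```

After assigning the loaded array to `stabilizer.transforms`, `apply_transforms` replays the saved motion compensation on a video, as in the README's final example.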