{"id":24397850,"url":"https://github.com/degirum/dg_gstreamer_plugin","last_synced_at":"2025-06-19T20:41:23.777Z","repository":{"id":166541570,"uuid":"618635473","full_name":"DeGirum/dg_gstreamer_plugin","owner":"DeGirum","description":"DeGirum GStreamer AI plugin","archived":false,"fork":false,"pushed_at":"2023-05-17T18:07:34.000Z","size":3162,"stargazers_count":4,"open_issues_count":0,"forks_count":1,"subscribers_count":5,"default_branch":"main","last_synced_at":"2025-04-22T22:48:47.161Z","etag":null,"topics":["ai","cpp","degirum","gstreamer","gstreamer-plugins","ml"],"latest_commit_sha":null,"homepage":"https://degirum.com","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DeGirum.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-03-24T23:07:01.000Z","updated_at":"2023-05-17T21:06:25.000Z","dependencies_parsed_at":null,"dependency_job_id":"7b41de59-4c64-4c3d-adf6-87782c1ca269","html_url":"https://github.com/DeGirum/dg_gstreamer_plugin","commit_stats":null,"previous_names":["degirum/dg_gstreamer_plugin"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/DeGirum/dg_gstreamer_plugin","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeGirum%2Fdg_gstreamer_plugin","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeGirum%2Fdg_gstreamer_plugin/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeGirum%2Fdg_gstreamer_plugin/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeGir
um%2Fdg_gstreamer_plugin/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DeGirum","download_url":"https://codeload.github.com/DeGirum/dg_gstreamer_plugin/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeGirum%2Fdg_gstreamer_plugin/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":260827630,"owners_count":23069001,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","cpp","degirum","gstreamer","gstreamer-plugins","ml"],"created_at":"2025-01-19T22:49:23.695Z","updated_at":"2025-06-19T20:41:18.768Z","avatar_url":"https://github.com/DeGirum.png","language":"C++","readme":"# dg_gstreamer_plugin\nThis is the repository for the DeGirum GStreamer Plugin.\nCompatible with NVIDIA DeepStream pipelines, it is capable of running AI inference using DeGirum Orca\u0026trade; AI hardware accelerator on upstream buffers \nand outputting NVIDIA metadata for use by downstream elements. \n\nThe element takes in a decoded video stream, performs AI inference on each frame, and adds AI inference results as metadata. 
Downstream elements in the \npipeline can process this metadata using standard methods from NVIDIA DeepStream's element library, such as displaying results on screen or sending data to a server.\n\nFor more information on NVIDIA's DeepStream SDK and the elements to be used in conjunction with this plugin, refer to the [DeepStream Plugin Guide].\n\n## Table of Contents\n- [Example pipelines to show the plugin in action](#example-pipelines-to-show-the-plugin-in-action)\n  - [Inference and visualization of 1 input video](#1-inference-and-visualization-of-1-input-video)\n  - [Inference and visualization of two models on one video](#2-inference-and-visualization-of-two-models-on-one-video)\n  - [Inference and visualization of one model on two videos](#3-inference-and-visualization-of-one-model-on-two-videos)\n  - [Inference and visualization of one model on 4 videos with frame skipping only](#4-inference-and-visualization-of-one-model-on-4-videos-with-frame-skipping-only)\n  - [Improved inference and visualization of 4 videos with capped framerates and frame skipping](#5-improved-inference-and-visualization-of-4-videos-with-capped-framerates-and-frame-skipping)\n  - [Improved inference and visualization of 8 videos with capped framerates and frame skipping](#6-improved-inference-and-visualization-of-8-videos-with-capped-framerates-and-frame-skipping)\n  - [Inference and visualization using a cloud model](#7-inference-and-visualization-using-a-cloud-model)\n  - [Inference without visualization (model benchmark) example](#8-inference-without-visualization-model-benchmark-example)\n  - [Inference and visualization with tracking](#9-inference-and-visualization-with-tracking)\n  - [Inference and visualization of Pose Estimation](#10-inference-and-visualization-of-pose-estimation)\n  - [Inference and visualization of Classification](#11-inference-and-visualization-of-classification)\n  - [Inference and visualization of Segmentation](#12-inference-and-visualization-of-segmentation)\n- 
[Plugin Properties](#plugin-properties)\n- [Installation](#dependencies)\n\n\n\n***\n\n# Example pipelines to show the plugin in action\n\n\n## 1. Inference and visualization of 1 input video\n```sh\ngst-launch-1.0 nvurisrcbin uri=file:///\u003cvideo-file-location\u003e ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model_name=\u003cmodel-name\u003e ! nvvideoconvert ! queue ! nvdsosd ! queue ! nvegltransform ! nveglglessink enable-last-sample=0\n```\n![1videoInference](https://user-images.githubusercontent.com/126506976/230221405-5cc76511-2896-465c-a5db-f32923a9b074.png)\n\n\n## 2. Inference and visualization of two models on one video\n```sh\ngst-launch-1.0 nvurisrcbin uri=file:///\u003cvideo-file-location\u003e ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! dgaccelerator server_ip=\u003cserver-ip\u003e model_name=\u003cmodel-one-name\u003e drop-frames=false box-color=pink ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model_name=\u003cmodel-two-name\u003e drop-frames=false box-color=red ! queue ! nvvideoconvert ! queue ! nvdsosd process-mode=1 ! nvegltransform ! nveglglessink enable-last-sample=0\n```\n![1video2models-color](https://user-images.githubusercontent.com/126506976/232162664-c9a50e45-afda-4c10-8a81-c4c0f93e0de5.png)\n*Note the usage of the box-color property to distinguish between the two models.*\n\n## 3. Inference and visualization of one model on two videos\n```sh\ngst-launch-1.0 nvurisrcbin uri=file://\u003cvideo-file-1-location\u003e ! m.sink_0 nvstreammux name=m live-source=0 buffer-pool-size=4 batch-size=2 batched-push-timeout=23976023 sync-inputs=true width=1920 height=1080 enable-padding=0 nvbuf-memory-type=0 ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model_name=\u003cmodel-name\u003e ! nvmultistreamtiler rows=1 columns=2 width=1920 height=1080 nvbuf-memory-type=0 ! queue ! nvdsosd ! queue ! nvegltransform ! 
nveglglessink enable-last-sample=0 \\\nnvurisrcbin uri=file://\u003cvideo-file-2-location\u003e ! m.sink_1\n```\n![2videosInference](https://user-images.githubusercontent.com/126506976/230218942-28565ccc-d34a-478f-acb0-6493a5f5f2a5.png)\n\n## 4. Inference and visualization of one model on 4 videos with frame skipping only\n```sh\ngst-launch-1.0 nvurisrcbin file-loop=true uri=file://\u003cvideo-file-1-location\u003e ! m.sink_0 nvstreammux name=m batch-size=4 batched-push-timeout=23976023 sync-inputs=true width=1920 height=1080 enable-padding=0 nvbuf-memory-type=0 ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model_name=\u003cmodel-name\u003e ! nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 nvbuf-memory-type=0 ! queue ! nvdsosd ! queue ! nvegltransform ! nveglglessink enable-last-sample=0 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-2-location\u003e ! m.sink_1 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-3-location\u003e ! m.sink_2 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-4-location\u003e ! m.sink_3\n```\nThis pipeline performs inference on 4 input streams at once and visualizes the results in a window. Because we haven't made any assumptions about the framerates of the input streams or the model's processing speed, frame skipping is likely to occur to keep up with real-time visualization.\n![4videosInference](https://user-images.githubusercontent.com/126506976/231277598-fb717798-188c-4244-a086-de643aeb0c1e.png)\n\n## 5. Improved inference and visualization of 4 videos with capped framerates and frame skipping\n```sh\ngst-launch-1.0 nvurisrcbin file-loop=true uri=file://\u003cvideo-file-1-location\u003e ! videorate drop-only=true max-rate=\u003cframerate\u003e ! m.sink_0 nvstreammux name=m batch-size=4 batched-push-timeout=23976023 sync-inputs=true width=1920 height=1080 enable-padding=0 nvbuf-memory-type=0 ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model_name=\u003cmodel-name\u003e ! 
nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 nvbuf-memory-type=0 ! queue ! nvdsosd ! queue ! nvegltransform ! nveglglessink enable-last-sample=0 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-2-location\u003e ! videorate drop-only=true max-rate=\u003cframerate\u003e ! m.sink_1 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-3-location\u003e ! videorate drop-only=true max-rate=\u003cframerate\u003e ! m.sink_2 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-4-location\u003e ! videorate drop-only=true max-rate=\u003cframerate\u003e ! m.sink_3\n```\nThe difference between this pipeline and the previous one is the addition of 4 ```videorate``` elements, which make the pipeline more reliable. Each of these elements is set to drop upstream frames to conform to a maximum framerate. This ensures that the framerate of inference is at most ```4 * \u003cframerate\u003e```, which is useful to ensure continuous operation of inference on the 4 streams. However, frame skipping is still enabled and will happen if needed to ensure a smooth pipeline.\n\n## 6. Improved inference and visualization of 8 videos with capped framerates and frame skipping\n\nStemming directly from the previous example, it's possible to achieve smooth inference on 8 (or more) videos at once so long as their framerates are capped.\n\n```sh\ngst-launch-1.0 nvurisrcbin file-loop=true uri=file://\u003cvideo-file-1-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_0 \\\nnvstreammux name=m live-source=0 buffer-pool-size=4 batch-size=8 batched-push-timeout=23976023 sync-inputs=true width=1920 height=1080 enable-padding=0 nvbuf-memory-type=0 ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model_name=\u003cmodel-name\u003e ! nvmultistreamtiler rows=2 columns=4 width=1920 height=1080 nvbuf-memory-type=0 ! queue ! nvdsosd ! queue ! nvegltransform ! 
nveglglessink enable-last-sample=0 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-2-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_1 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-3-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_2 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-4-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_3 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-5-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_4 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-6-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_5 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-7-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_6 \\\nnvurisrcbin file-loop=true uri=file://\u003cvideo-file-8-location\u003e ! videorate drop-only=true max-rate=12 ! m.sink_7\n```\n![8videosInference](https://user-images.githubusercontent.com/126506976/231595499-87f37c00-6a57-453f-9181-b62d5e75b930.png)\n\n*Note that this pipeline runs smoothly provided we know that ```\u003cmodel-name\u003e``` can handle at least 8 * 12 = 96 frames per second.*\n\n## 7. Inference and visualization using a cloud model\n```sh\ngst-launch-1.0 nvurisrcbin uri=file://\u003cvideo-file-location\u003e ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! dgaccelerator server_ip=\u003cserver-ip\u003e model-name=\u003ccloud-model-location\u003e cloud-token=\u003ctoken\u003e ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink\n```\nHere, we have added the additional parameter ```cloud-token=\u003ctoken\u003e```.\n\n## 8. Inference without visualization (model benchmark) example\n```sh\ngst-launch-1.0 nvurisrcbin uri=file://\u003cvideo-file-location\u003e ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model-name=\u003cmodel-name\u003e drop-frames=false ! 
fakesink enable-last-sample=0\n```\nNote the addition of ```drop-frames=false```, as we do not need to sync to real time. This pipeline is also useful for finding the maximum processing speed of a model.\n\n## 9. Inference and visualization with tracking\n```sh\ngst-launch-1.0 nvurisrcbin uri=file://\u003cvideo-file-location\u003e ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! dgaccelerator server_ip=\u003cserver-ip\u003e model-name=\u003cmodel-name\u003e drop-frames=false ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! nvvideoconvert ! nvdsosd ! queue ! nvegltransform ! nveglglessink enable-last-sample=0\n```\n![1videoInferenceTracking](https://user-images.githubusercontent.com/126506976/234064084-11fdc31a-97dc-491b-a651-132823f5285e.png)\n\nThis pipeline uses the default ```nvtracker``` configuration; it can be customized by modifying the ```ll-config-file``` and ```ll-lib-file``` properties of nvtracker.\n***\n\n## 10. Inference and visualization of Pose Estimation\n```sh\ngst-launch-1.0 nvurisrcbin uri=file://\u003cvideo-file-location\u003e ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! dgaccelerator processing-width=\u003cwidth\u003e processing-height=\u003cheight\u003e server_ip=\u003cserver-ip\u003e model-name=\u003cmodel-name\u003e drop-frames=false ! nvdsosd ! queue ! nvegltransform ! nveglglessink enable-last-sample=0\n```\n![1videoPoseEstimation](https://github.com/DeGirum/dg_gstreamer_plugin/assets/126506976/ebe9be93-9031-4691-8301-d9080a366a9e)\n\n## 11. Inference and visualization of Classification\n```sh\ngst-launch-1.0 nvurisrcbin uri=file://\u003cvideo-file-location\u003e ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! dgaccelerator processing-width=\u003cwidth\u003e processing-height=\u003cheight\u003e server_ip=\u003cserver-ip\u003e model-name=\u003cmodel-name\u003e drop-frames=false ! nvdsosd ! queue ! 
nvegltransform ! nveglglessink enable-last-sample=0\n```\n![1videoClassification](https://github.com/DeGirum/dg_gstreamer_plugin/assets/126506976/e51c5c1a-9863-4a4d-a038-2f39aec82450)\n\n## 12. Inference and visualization of Segmentation\n```sh\ngst-launch-1.0 nvurisrcbin file-loop=true uri=file://\u003cvideo-file-location\u003e ! videorate drop-only=true max-rate=24 ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! queue ! dgaccelerator processing-width=\u003cwidth\u003e processing-height=\u003cheight\u003e server_ip=\u003cserver-ip\u003e model-name=\u003cmodel-name\u003e drop-frames=false ! nvsegvisual width=1920 height=1080 ! queue ! nvegltransform ! nveglglessink enable-last-sample=0\n```\n![1videoSegmentation](https://github.com/DeGirum/dg_gstreamer_plugin/assets/126506976/7d5a447c-0d71-40dc-9a73-a30b67d8741f)\n\nFor Segmentation models, it is helpful to cap the framerate of the incoming video to match the model speed using the ```videorate``` element.\n\n# Plugin Properties\n\nThe DgAccelerator element has several parameters that can be set to configure inference.\n\n| Property Name | Default Value | Description |\n|---------------|---------------|-------------|\n| `box-color`   | `red`         | The color of the boxes in visualization pipelines. Choose from red, green, blue, cyan, pink, yellow, black. |\n| `cloud-token` | `null`        | The [DeGirum Cloud API access token](https://cs.degirum.com) needed to allow connection to DeGirum cloud models. See example 7. |\n| `drop-frames` | `true`        | If enabled, frames may be dropped in order to keep up with the real-time processing speed. This is useful when visualizing output in a pipeline. However, if your pipeline does not need visualization, you can disable this feature. Similarly, if you know that the upstream framerate (i.e. the rate at which the video data is being captured) is slower than the maximum processing speed of your model, it might be better to disable frame dropping. 
This can help ensure that all of the available frames are processed and none are lost, which could be important for certain applications. See examples 4, 5, 6, and 8. |\n| `gpu-id`      | `0`           | The index of the GPU for hardware acceleration, usually only changed when multiple GPUs are installed.  |\n| `model-name`  | `yolo_v5s_coco--512x512_quant_n2x_orca_1` | The full name of the DeGirum AI model to be used for inference. |\n| `processing-height` | `512` | The height of the accepted input stream for the model. |\n| `processing-width`  | `512` | The width of the accepted input stream for the model. |\n| `server-ip`   | `localhost` | The DeGirum AI server IP address or hostname to connect to for running AI inferences. |\n\nThese properties can be easily set within a `gst-launch-1.0` command, using the following syntax:\n```sh\ngst-launch-1.0 (...) ! dgaccelerator property1=value1 property2=value2 ! (...)\n```\n\n***\n\n# Dependencies\n\nThis plugin requires a [DeepStream installation], a [GStreamer installation], and an [OpenCV installation].\n\n| Package | Required Version |\n| ------ | ------ |\n| DeepStream | 6.2+ |\n| GStreamer | 1.16.3+ |\n| OpenCV | 4.2+ |\n\n\n**Installation Steps:**\n\n1. Clone the repository:\n    \n    ```git clone https://github.com/DeGirum/dg_gstreamer_plugin.git```\n\n1. Enter the directory:\n    \n    ```cd dg_gstreamer_plugin```\n\n1. Update the submodule:\n   \n   ```git submodule update --init```\n\n\u003e We have provided a shell script to install the above [dependencies](#dependencies). \n\u003e\n\u003e **If you already have DeepStream and OpenCV installed, please skip to [building the plugin](#build-the-plugin).**\n\n## Install dependencies with our script (optional):\n\n### For Jetson systems:\n\n1.  Download the Jetson tar file for DeepStream from [this link](https://developer.nvidia.com/downloads/deepstream-sdk-v620-jetson-tbz2)\n\n2.  
Run the script with the tar file's location as an argument: \n\n```./installDependencies.sh path/to/TarFile.tbz2 ```\n\n### For non-Jetson systems (systems with a dGPU):\n\n1.  Download the dGPU tar file for DeepStream from [this link](https://developer.nvidia.com/downloads/deepstream-sdk-v620-x86-64-tbz2)\n\n2.  Download the NVIDIA driver version 525.85.12 from [this link](https://www.nvidia.com/Download/driverResults.aspx/198879/en-us/)\n\n3.  Run the script with the tar file and driver file location as arguments: \n\n```./installDependencies.sh path/to/TarFile.tbz2 path/to/DriverFile.run```\n\n\u003e Alternatively, you can install DeepStream without the NVIDIA drivers if you already have them:\n\u003e\n\u003e ```./installDependencies.sh path/to/TarFile.tbz2```\n\n## Build the plugin\n\nTo build the plugin, while within the directory of the cloned repository, execute the following commands:\n\n```\nmkdir build \u0026\u0026 cd build\ncmake ..\ncmake --build . \nsudo cmake --install .\n```\n\nNow, GStreamer pipelines have access to the element ```dgaccelerator``` for accelerating video processing tasks using NVIDIA DeepStream.\n\n[DeepStream Plugin Guide]:\u003chttps://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_Intro.html\u003e\n[DeepStream installation]:\u003chttps://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html\u003e\n[GStreamer installation]:\u003chttps://gstreamer.freedesktop.org/documentation/installing/index.html\u003e\n[OpenCV installation]:\u003chttps://github.com/DeGirum/dg_gstreamer_plugin/assets/126506976/acfb55c2-eb85-4552-9a6f-ef7af7352e0b\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdegirum%2Fdg_gstreamer_plugin","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdegirum%2Fdg_gstreamer_plugin","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdegirum%2Fdg_gstreamer_plugin/lists"}