{"id":13615869,"url":"https://github.com/awslabs/aws-streamer","last_synced_at":"2025-04-13T21:31:45.784Z","repository":{"id":44149121,"uuid":"323406556","full_name":"awslabs/aws-streamer","owner":"awslabs","description":"Video Processing for AWS","archived":false,"fork":false,"pushed_at":"2024-07-21T01:05:02.000Z","size":1549,"stargazers_count":44,"open_issues_count":10,"forks_count":9,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-03-13T02:37:48.467Z","etag":null,"topics":["aws-streamer","gstreamer","gstreamer-python","kinesis-video-streams","video-processing"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/awslabs.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-12-21T17:39:31.000Z","updated_at":"2024-07-25T10:53:07.000Z","dependencies_parsed_at":"2024-01-16T23:30:46.220Z","dependency_job_id":"a5993e8c-205c-48f7-8712-6f16dc185dd8","html_url":"https://github.com/awslabs/aws-streamer","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awslabs%2Faws-streamer","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awslabs%2Faws-streamer/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awslabs%2Faws-streamer/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awslabs%2Faws-streamer/manifests","owner_url":"http
s://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/awslabs","download_url":"https://codeload.github.com/awslabs/aws-streamer/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248786386,"owners_count":21161442,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aws-streamer","gstreamer","gstreamer-python","kinesis-video-streams","video-processing"],"created_at":"2024-08-01T20:01:19.437Z","updated_at":"2025-04-13T21:31:40.769Z","avatar_url":"https://github.com/awslabs.png","language":"Python","readme":"# AWS Streamer\n\n
AWS Streamer is a collection of video processing and streaming tools for the AWS platform. It enables users to stream from multiple camera sources, process the video streams, and upload them to the cloud and/or local storage. It can be used standalone on an edge device, inside AWS Lambda functions, in an AWS ECS container, or on AWS IoT Greengrass as a Lambda.\n\n
\u003cp align=\"center\"\u003e\n  \u003ca href=\"#key-features\"\u003eKey Features\u003c/a\u003e •\n  \u003ca href=\"#build\"\u003eBuild\u003c/a\u003e •\n  \u003ca href=\"#usage\"\u003eUsage\u003c/a\u003e •\n  \u003ca href=\"#notes\"\u003eNotes\u003c/a\u003e •\n  \u003ca href=\"#debugging\"\u003eDebugging\u003c/a\u003e •\n  \u003ca href=\"#security\"\u003eSecurity\u003c/a\u003e •\n  \u003ca href=\"#license\"\u003eLicense\u003c/a\u003e\n\u003c/p\u003e\n\n
## Key Features\n\nThis library provides the following features:\n\n - Stream from multiple cameras in parallel, locally and to the cloud\n - Upload video streams in chunks to S3 and/or store them on local disk\n - Upload video streams directly to Kinesis Video Streams\n - Perform ML inference on the video stream\n - Run computer vision algorithms on the video streams\n - Preview the live stream in the browser\n\n
## Build\n\n### Prerequisites\n\n- Python 3.7 or newer\n\n- Install OS packages:\n\n    - [Ubuntu](INSTALL.md#ubuntu)\n    - [macOS](INSTALL.md#macos)\n    - [Windows](INSTALL.md#windows)\n\n### Install\n\n- With pip:\n    ``` bash\n    pip install git+https://github.com/awslabs/aws-streamer.git\n    ```\n\n    or\n\n    ``` bash\n    git clone https://github.com/awslabs/aws-streamer.git\n    cd aws-streamer\n    pip install -v .\n    ```\n\n    To set extra CMake flags (see the table below):\n    ``` bash\n    python3 setup.py install\n    python3 setup.py build_ext --cmake-args=-DBUILD_KVS=ON\n    ```\n\n- In place:\n    ``` bash\n    virtualenv venv\n    source venv/bin/activate\n\n    pip install --upgrade wheel pip setuptools\n    pip install --upgrade --requirement requirements.txt\n\n    ./build.sh [optional:CMAKE_FLAGS]\n    ```\n\n\n
### CMake Options\n\n| CMake flag          | Description                       | Default value |\n| ------------------- | --------------------------------- | ------------- |\n| -DBUILD_KVS         | Build KVS GStreamer plug-in       | OFF           |\n| -DBUILD_KVS_WEBRTC  | Build KVS WebRTC binaries         | OFF           |\n| -DBUILD_NEO_DLR     | Build SageMaker NEO runtime       | OFF           |\n| -DBUILD_MXNET       | Build MXNet GStreamer plug-in     | OFF           |\n\n\n
## Usage\n\n### Using JSON Configuration\n\n```bash\ncd examples/test_app\npython3 app.py ../configs/testsrc_display.json\n```\n\n### Demos\n\nThere are two full-blown demos available:\n- [IoT (Greengrass)](examples/greengrass/README.md)\n- [Serverless (Fargate)](examples/serverless/README.md)\n\nClick on the links above to read more and see the detailed architecture.\n\n### Using AWS Streamer as an SDK\n\n``` python\nimport awstreamer\n\nclient = awstreamer.client()\n```\n\n
- To stream from your camera to KVS:\n    ``` python\n    client.start({\n        \"source\": {\n            \"name\": \"videotestsrc\",\n            \"is-live\": True,\n            \"do-timestamp\": True,\n            \"width\": 640,\n            \"height\": 480,\n            \"fps\": 30\n        },\n        \"sink\": {\n            \"name\": \"kvssink\",\n            \"stream-name\": \"TestStream\"\n        }\n    })\n    ```\n\n
- To run multiple pipelines in parallel asynchronously (i.e. without waiting for the pipelines to finish):\n    ``` python\n    client.schedule({\n        \"pipeline_0\": {\n            \"source\": {\n                \"name\": \"videotestsrc\",\n                \"is-live\": True,\n                \"do-timestamp\": True,\n                \"width\": 640,\n                \"height\": 480,\n                \"fps\": 30\n            },\n            \"sink\": {\n                \"name\": \"kvssink\",\n                \"stream-name\": \"TestStream0\"\n            }\n        },\n        \"pipeline_1\": {\n            \"source\": {\n                \"name\": \"videotestsrc\",\n                \"is-live\": True,\n                \"do-timestamp\": True,\n                \"width\": 1280,\n                \"height\": 720,\n                \"fps\": 30\n            },\n            \"sink\": {\n                \"name\": \"kvssink\",\n                \"stream-name\": \"TestStream1\"\n            }\n        }\n    })\n    ```\n\n
- To perform ML inference on the video stream:\n    ``` python\n    def my_callback(metadata):\n        print(\"Inference results: \" + str(metadata))\n\n    client.start({\n        \"pipeline\": \"DeepStream\",\n        \"source\": {\n            \"name\": \"filesrc\",\n            \"location\": \"/path/to/video.mp4\",\n            \"do-timestamp\": False\n        },\n        \"nvstreammux\": {\n            \"width\": 1280,\n            \"height\": 720,\n            \"batch-size\": 1\n        },\n        \"nvinfer\": {\n            \"enabled\": True,\n            \"config-file-path\": \"/path/to/nvinfer_config.txt\"\n        },\n        \"callback\": my_callback\n    })\n    ```\n\n
- To start recording video segments to disk:\n    ``` python\n    client.schedule({\n        \"camera_0\": {\n            \"pipeline\": \"DVR\",\n            \"source\": {\n                \"name\": \"videotestsrc\",\n                \"is-live\": True,\n                \"do-timestamp\": True,\n                \"width\": 640,\n                \"height\": 480,\n                \"fps\": 30\n            },\n            \"sink\": {\n                \"name\": \"splitmuxsink\",\n                \"location\": \"/video/camera_0/output_%02d.mp4\",\n                \"segment_duration\": \"00:01:00\",\n                \"time_to_keep_days\": 1\n            }\n        }\n    })\n    ```\n    The command above will start recording 1-minute video segments to the given location.\n\n
- To get the list of files within a given time range:\n    ``` python\n    from awstreamer.utils.video import get_video_files_in_time_range\n\n    file_list = get_video_files_in_time_range(\n        path = \"/video/camera_0/\",\n        timestamp_from = \"2020-08-05 13:03:47\",\n        timestamp_to = \"2020-08-05 13:05:40\",\n    )\n    ```\n\n
- To merge video files into a single one:\n    ``` python\n    from awstreamer.utils.video import merge_video_files\n\n    merged = merge_video_files(\n        files = file_list,\n        destination_file = \"merged.mkv\"\n    )\n    ```\n\n
- To get a video frame from any point in the pipeline:\n    ``` python\n    def my_callback(buffer):\n        '''\n        This function will be called on every frame.\n        The buffer is an ndarray; do with it what you like!\n        '''\n        print(\"Buffer info: %s, %s, %s\" % (str(type(buffer)), str(buffer.dtype), str(buffer.shape)))\n\n    client.start({\n        \"pipeline\": {\n            \"source\": \"videotestsrc\",\n            \"source_filter\": \"capsfilter\",\n            \"sink\": \"autovideosink\"\n        },\n        \"source\": {\n            \"is-live\": True,\n            \"do-timestamp\": True\n        },\n        \"source_filter\": {\n            \"caps\": \"video/x-raw,width=640,height=480,framerate=30/1\",\n            \"probes\": {\n                \"src\": my_callback\n            }\n        }\n    })\n    ```\n    The code above will attach the probe to the source (outbound) pad of the source_filter plug-in. Note that the \"caps\" and \"probes\" settings live in a single \"source_filter\" entry; a duplicated dictionary key would silently overwrite the first value.\n\n
## Notes\n\nIf you use an AWS plug-in (e.g. KVS) outside of the AWS environment (i.e. not in AWS IoT Greengrass, AWS Lambda, etc.), remember to set the following environment variables:\n\n```bash\nexport AWS_ACCESS_KEY_ID=xxxxxxxxx\nexport AWS_SECRET_ACCESS_KEY=xxxxxxxxxx\nexport AWS_DEFAULT_REGION=us-east-1  # for example\n```\n\n
## Debugging\n\nTo enable more debugging information from GStreamer elements, set this environment variable:\n\n    export GST_DEBUG=4\n\nMore details here: https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html?gi-language=c\n\n## Security\n\nSee [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.\n\n## License\n\nThis project is licensed under the Apache-2.0 License.\n
","funding_links":[],"categories":["HarmonyOS"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fawslabs%2Faws-streamer","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fawslabs%2Faws-streamer","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fawslabs%2Faws-streamer/lists"}