{"id":14004682,"url":"https://github.com/hacksider/Deep-Live-Cam","last_synced_at":"2025-07-23T20:31:48.048Z","repository":{"id":212428468,"uuid":"695864515","full_name":"hacksider/Deep-Live-Cam","owner":"hacksider","description":"real time face swap and one-click video deepfake with only a single image","archived":false,"fork":false,"pushed_at":"2024-11-08T17:51:31.000Z","size":106388,"stargazers_count":39728,"open_issues_count":40,"forks_count":5810,"subscribers_count":240,"default_branch":"main","last_synced_at":"2024-11-09T23:45:16.227Z","etag":null,"topics":["ai","ai-deep-fake","ai-face","ai-webcam","artificial-intelligence","deep-fake","deepfake","deepfake-webcam","faceswap","fake-webcam","gan","real-time-deepfake","realtime","realtime-deepfake","realtime-face-changer","video-deepfake","webcam","webcamera"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"agpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hacksider.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-09-24T13:19:31.000Z","updated_at":"2024-11-09T23:37:56.000Z","dependencies_parsed_at":"2024-11-17T15:36:34.229Z","dependency_job_id":null,"html_url":"https://github.com/hacksider/Deep-Live-Cam","commit_stats":null,"previous_names":["hacksider/deep-live-cam"],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hacksider%2FDeep-Live-Cam","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hacksider%2FDeep-Live-Cam/tags","releases_url":"http
s://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hacksider%2FDeep-Live-Cam/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hacksider%2FDeep-Live-Cam/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hacksider","download_url":"https://codeload.github.com/hacksider/Deep-Live-Cam/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":227345792,"owners_count":17767991,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","ai-deep-fake","ai-face","ai-webcam","artificial-intelligence","deep-fake","deepfake","deepfake-webcam","faceswap","fake-webcam","gan","real-time-deepfake","realtime","realtime-deepfake","realtime-face-changer","video-deepfake","webcam","webcamera"],"created_at":"2024-08-10T07:00:32.376Z","updated_at":"2025-07-23T20:31:48.028Z","avatar_url":"https://github.com/hacksider.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003eDeep-Live-Cam\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n  Real-time face swap and video deepfake with a single click and only a single image.\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n\u003ca href=\"https://trendshift.io/repositories/11395\" target=\"_blank\"\u003e\u003cimg src=\"https://trendshift.io/api/badge/repositories/11395\" alt=\"hacksider%2FDeep-Live-Cam | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"/\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"media/demo.gif\" alt=\"Demo GIF\" 
width=\"800\"\u003e\n\u003c/p\u003e\n\n##  Disclaimer\n\nThis deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.\n\nWe are aware of the potential for unethical applications and are committed to preventative measures. A built-in check prevents the program from processing inappropriate media (nudity, graphic content, sensitive material like war footage, etc.). We will continue to develop this project responsibly, adhering to the law and ethics. We may shut down the project or add watermarks if legally required.\n\n- Ethical Use: Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online.\n\n- Content Restrictions: The software includes built-in checks to prevent processing inappropriate media, such as nudity, graphic content, or sensitive material.\n\n- Legal Compliance: We adhere to all relevant laws and ethical guidelines. If legally required, we may shut down the project or add watermarks to the output.\n\n- User Responsibility: We are not responsible for end-user actions. Users must ensure their use of the software aligns with ethical standards and legal requirements.\n\nBy using this software, you agree to these terms and commit to using it in a manner that respects the rights and dignity of others.\n\n
We are not responsible for end-user actions.\n\n## Exclusive v2.1 Quick Start - Pre-built (Windows/Mac Silicon)\n\n  \u003ca href=\"https://deeplivecam.net/index.php/quickstart\"\u003e \u003cimg src=\"media/Download.png\" width=\"285\" height=\"77\" /\u003e\u003c/a\u003e\n\n##### This is the fastest build you can get if you have a discrete NVIDIA or AMD GPU or Mac Silicon, and you'll receive special priority support.\n\n###### These pre-builds are perfect for non-technical users or those who don't have time to, or can't, manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.\n\n## TL;DR: Live Deepfake in Just 3 Clicks\n![easysteps](https://github.com/user-attachments/assets/af825228-852c-411b-b787-ffd9aac72fc6)\n1. Select a face\n2. Select which camera to use\n3. Press Live!\n\n## Features \u0026 Uses - Everything is in real-time\n\n### Mouth Mask\n\n**Retain your original mouth for accurate movement using Mouth Mask**\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"media/ludwig.gif\" alt=\"resizable-gif\"\u003e\n\u003c/p\u003e\n\n### Face Mapping\n\n**Use different faces on multiple subjects simultaneously**\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"media/streamers.gif\" alt=\"face_mapping_source\"\u003e\n\u003c/p\u003e\n\n### Your Movie, Your Face\n\n**Watch movies with any face in real-time**\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"media/movie.gif\" alt=\"movie\"\u003e\n\u003c/p\u003e\n\n### Live Show\n\n**Run Live shows and performances**\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"media/live_show.gif\" alt=\"show\"\u003e\n\u003c/p\u003e\n\n### Memes\n\n**Create Your Most Viral Meme Yet**\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"media/meme.gif\" alt=\"show\" width=\"450\"\u003e \n  \u003cbr\u003e\n  \u003csub\u003eCreated using the Many Faces feature in Deep-Live-Cam\u003c/sub\u003e\n\u003c/p\u003e\n\n### Omegle\n\n**Surprise people on Omegle**\n\n\u003cp 
align=\"center\"\u003e\n  \u003cvideo src=\"https://github.com/user-attachments/assets/2e9b9b82-fa04-4b70-9f56-b1f68e7672d0\" width=\"450\" controls\u003e\u003c/video\u003e\n\u003c/p\u003e\n\n## Installation (Manual)\n\n**Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the quickstart version.**\n\n\u003cdetails\u003e\n\u003csummary\u003eClick to see the process\u003c/summary\u003e\n\n### Installation\n\nThis basic (CPU) installation is the most likely to work on your computer, but it will be slower because it does not use a GPU.\n\n**1. Set up Your Platform**\n\n-   Python (3.11 recommended)\n-   pip\n-   git\n-   [ffmpeg](https://www.youtube.com/watch?v=OlNWCpFdVMA) - on Windows you can install it from PowerShell with ```iex (irm ffmpeg.tc.ht)```\n-   [Visual Studio 2022 Runtimes (Windows)](https://visualstudio.microsoft.com/visual-cpp-build-tools/)\n\n**2. Clone the Repository**\n\n```bash\ngit clone https://github.com/hacksider/Deep-Live-Cam.git\ncd Deep-Live-Cam\n```\n\n**3. Download the Models**\n\n1. [GFPGANv1.4](https://huggingface.co/hacksider/deep-live-cam/resolve/main/GFPGANv1.4.pth)\n2. [inswapper\\_128\\_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx)\n\nPlace these files in the \"**models**\" folder.\n\n**4. 
Install Dependencies**\n\nWe highly recommend using a `venv` to avoid issues.\n\nFor Windows:\n```bash\npython -m venv venv\nvenv\\Scripts\\activate\npip install -r requirements.txt\n```\nFor Linux:\n```bash\n# Use your installed Python 3 (3.11 recommended)\npython3 -m venv venv\nsource venv/bin/activate\npip install -r requirements.txt\n```\n\n**For macOS:**\n\nApple Silicon (M1/M2/M3) requires specific setup:\n\n```bash\n# Install Python 3.10 (this specific version is important)\nbrew install python@3.10\n\n# Install the tkinter package (required for the GUI)\nbrew install python-tk@3.10\n\n# Create and activate a virtual environment with Python 3.10\npython3.10 -m venv venv\nsource venv/bin/activate\n\n# Install dependencies\npip install -r requirements.txt\n```\n\n**In case something goes wrong and you need to reinstall the virtual environment:**\n\n```bash\n# Remove the existing virtual environment\nrm -rf venv\n\n# Recreate the virtual environment\npython -m venv venv\nsource venv/bin/activate\n\n# Install the dependencies again\npip install -r requirements.txt\n```\n\n**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that the initial execution will download models (~300MB).\n\n### GPU Acceleration\n\n**CUDA Execution Provider (Nvidia)**\n\n1. Install [CUDA Toolkit 12.8.0](https://developer.nvidia.com/cuda-12-8-0-download-archive)\n2. Install [cuDNN v8.9.7 for CUDA 12.x](https://developer.nvidia.com/rdp/cudnn-archive) (required for onnxruntime-gpu):\n   - Download cuDNN v8.9.7 for CUDA 12.x\n   - Make sure the cuDNN bin directory is in your system PATH\n3. Install dependencies:\n\n```bash\npip install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128\npip uninstall onnxruntime onnxruntime-gpu\npip install onnxruntime-gpu==1.21.0\n```\n\n4. Usage:\n\n```bash\npython run.py --execution-provider cuda\n```\n\n**CoreML Execution Provider (Apple Silicon)**\n\nApple Silicon (M1/M2/M3) specific installation:\n\n1. 
Make sure you've completed the macOS setup above using Python 3.10.\n2. Install dependencies:\n\n```bash\npip uninstall onnxruntime onnxruntime-silicon\npip install onnxruntime-silicon==1.13.1\n```\n\n3. Usage (important: specify Python 3.10):\n\n```bash\npython3.10 run.py --execution-provider coreml\n```\n\n**Important Notes for macOS:**\n- You **must** use Python 3.10, not newer versions like 3.11 or 3.13\n- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed\n- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`\n- If you get model loading errors, check that your models are in the correct folder\n- If you encounter conflicts with other Python versions, consider uninstalling them:\n  ```bash\n  # List all installed Python versions\n  brew list | grep python\n  \n  # Uninstall conflicting versions if needed\n  brew uninstall --ignore-dependencies python@3.11 python@3.13\n  \n  # Keep only Python 3.10\n  brew cleanup\n  ```\n\n**CoreML Execution Provider (Apple Legacy)**\n\n1. Install dependencies:\n\n```bash\npip uninstall onnxruntime onnxruntime-coreml\npip install onnxruntime-coreml==1.21.0\n```\n\n2. Usage:\n\n```bash\npython run.py --execution-provider coreml\n```\n\n**DirectML Execution Provider (Windows)**\n\n1. Install dependencies:\n\n```bash\npip uninstall onnxruntime onnxruntime-directml\npip install onnxruntime-directml==1.21.0\n```\n\n2. Usage:\n\n```bash\npython run.py --execution-provider directml\n```\n\n**OpenVINO™ Execution Provider (Intel)**\n\n1. Install dependencies:\n\n```bash\npip uninstall onnxruntime onnxruntime-openvino\npip install onnxruntime-openvino==1.21.0\n```\n\n2. Usage:\n\n```bash\npython run.py --execution-provider openvino\n```\n\u003c/details\u003e\n\n## Usage\n\n**1. 
Image/Video Mode**\n\n-   Execute `python run.py`.\n-   Choose a source face image and a target image/video.\n-   Click \"Start\".\n-   The output will be saved in a directory named after the target video.\n\n**2. Webcam Mode**\n\n-   Execute `python run.py`.\n-   Select a source face image.\n-   Click \"Live\".\n-   Wait for the preview to appear (10-30 seconds).\n-   Use a screen capture tool like OBS to stream.\n-   To change the face, select a new source image.\n\n## Command Line Arguments (Unmaintained)\n\n```\noptions:\n  -h, --help                                               show this help message and exit\n  -s SOURCE_PATH, --source SOURCE_PATH                     select a source image\n  -t TARGET_PATH, --target TARGET_PATH                     select a target image or video\n  -o OUTPUT_PATH, --output OUTPUT_PATH                     select output file or directory\n  --frame-processor FRAME_PROCESSOR [FRAME_PROCESSOR ...]  frame processors (choices: face_swapper, face_enhancer, ...)\n  --keep-fps                                               keep original fps\n  --keep-audio                                             keep original audio\n  --keep-frames                                            keep temporary frames\n  --many-faces                                             process every face\n  --map-faces                                              map source target faces\n  --mouth-mask                                             mask the mouth region\n  --video-encoder {libx264,libx265,libvpx-vp9}             adjust output video encoder\n  --video-quality [0-51]                                   adjust output video quality\n  --live-mirror                                            the live camera display as you see it in the front-facing camera frame\n  --live-resizable                                         the live camera frame is resizable\n  --max-memory MAX_MEMORY                                  maximum amount of RAM in GB\n  
--execution-provider {cpu} [{cpu} ...]                   available execution provider (choices: cpu, ...)\n  --execution-threads EXECUTION_THREADS                    number of execution threads\n  -v, --version                                            show program's version number and exit\n```\n\nLooking for a CLI mode? Passing the -s/--source argument will run the program in CLI mode.\n\n## Press\n\n**We are always open to criticism and ready to improve, which is why we didn't cherry-pick anything.**\n\n - [*\"Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger\"*](https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/) - Ars Technica\n - [*\"Thanks Deep Live Cam, shapeshifters are among us now\"*](https://dataconomy.com/2024/08/15/what-is-deep-live-cam-github-deepfake/) - Dataconomy\n - [*\"This free AI tool lets you become anyone during video-calls\"*](https://www.newsbytesapp.com/news/science/deep-live-cam-ai-impersonation-tool-goes-viral/story) - NewsBytes\n - [*\"OK, this viral AI live stream software is truly terrifying\"*](https://www.creativebloq.com/ai/ok-this-viral-ai-live-stream-software-is-truly-terrifying) - Creative Bloq\n - [*\"Deepfake AI Tool Lets You Become Anyone in a Video Call With Single Photo\"*](https://petapixel.com/2024/08/14/deep-live-cam-deepfake-ai-tool-lets-you-become-anyone-in-a-video-call-with-single-photo-mark-zuckerberg-jd-vance-elon-musk/) - PetaPixel\n - [*\"Deep-Live-Cam Uses AI to Transform Your Face in Real-Time, Celebrities Included\"*](https://www.techeblog.com/deep-live-cam-ai-transform-face/) - TechEBlog\n - [*\"An AI tool that \"makes you look like anyone\" during a video call is going viral online\"*](https://telegrafi.com/en/a-tool-that-makes-you-look-like-anyone-during-a-video-call-is-going-viral-on-the-Internet/) - Telegrafi\n - [*\"This Deepfake Tool Turning Images Into Livestreams is Topping the GitHub 
Charts\"*](https://decrypt.co/244565/this-deepfake-tool-turning-images-into-livestreams-is-topping-the-github-charts) - Emerge\n - [*\"New Real-Time Face-Swapping AI Allows Anyone to Mimic Famous Faces\"*](https://www.digitalmusicnews.com/2024/08/15/face-swapping-ai-real-time-mimic/) - Digital Music News\n - [*\"This real-time webcam deepfake tool raises alarms about the future of identity theft\"*](https://www.diyphotography.net/this-real-time-webcam-deepfake-tool-raises-alarms-about-the-future-of-identity-theft/) - DIYPhotography\n - [*\"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude\"*](https://www.youtube.com/watch?time_continue=1074\u0026v=py4Tc-Y8BcY) - SomeOrdinaryGamers\n - [*\"Alright look look look, now look chat, we can do any face we want to look like chat\"*](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared\u0026t=2686) - IShowSpeed\n - [*\"They do a pretty good job matching poses, expression and even the lighting\"*](https://www.youtube.com/watch?v=wnCghLjqv3s\u0026t=551s) - TechLinked (LTT)\n\n\n## Credits\n\n-   [ffmpeg](https://ffmpeg.org/): for making video-related operations easy\n-   [deepinsight](https://github.com/deepinsight): for their [insightface](https://github.com/deepinsight/insightface) project which provided a well-made library and models. 
Please be reminded that the [use of the model is for non-commercial research purposes only](https://github.com/deepinsight/insightface?tab=readme-ov-file#license).\n-   [havok2-htwo](https://github.com/havok2-htwo): for sharing the code for webcam\n-   [GosuDRM](https://github.com/GosuDRM): for the open version of roop\n-   [pereiraroland26](https://github.com/pereiraroland26): Multiple faces support\n-   [vic4key](https://github.com/vic4key): For supporting/contributing to this project\n-   [kier007](https://github.com/kier007): for improving the user experience\n-   [qitianai](https://github.com/qitianai): for multi-lingual support\n-   and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.\n-   Footnote: Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)\n-   All the wonderful users who helped make this project go viral by starring the repo ❤️\n\n[![Stargazers](https://reporoster.com/stars/hacksider/Deep-Live-Cam)](https://github.com/hacksider/Deep-Live-Cam/stargazers)\n\n## Contributions\n\n![Alt](https://repobeats.axiom.co/api/embed/fec8e29c45dfdb9c5916f3a7830e1249308d20e1.svg \"Repobeats analytics image\")\n\n## Stars to the Moon 🚀\n\n\u003ca href=\"https://star-history.com/#hacksider/deep-live-cam\u0026Date\"\u003e\n \u003cpicture\u003e\n   \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://api.star-history.com/svg?repos=hacksider/deep-live-cam\u0026type=Date\u0026theme=dark\" /\u003e\n   \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"https://api.star-history.com/svg?repos=hacksider/deep-live-cam\u0026type=Date\" /\u003e\n   \u003cimg alt=\"Star History Chart\" src=\"https://api.star-history.com/svg?repos=hacksider/deep-live-cam\u0026type=Date\" /\u003e\n \u003c/picture\u003e\n\u003c/a\u003e\n","funding_links":[],"categories":["Python","热门 LLM 项目","ai","视频动画","Trending LLM Projects","🆕 New and Emerging AI 
Tools","Repos","\u003ca name=\"Python\"\u003e\u003c/a\u003ePython","App"],"sub_categories":["Other Cloud Provider Credits"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhacksider%2FDeep-Live-Cam","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhacksider%2FDeep-Live-Cam","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhacksider%2FDeep-Live-Cam/lists"}