{"id":28203195,"url":"https://github.com/a3ro-dev/autofaceframing","last_synced_at":"2025-06-22T22:38:18.927Z","repository":{"id":291098252,"uuid":"976580022","full_name":"a3ro-dev/autoFaceFraming","owner":"a3ro-dev","description":"Auto Face Framing is a Python application that detects and tracks the user's face in real time and presents the result as a virtual camera. It uses OpenCV for face detection and tracking, keeping the user's face centered in the frame while providing smooth video output.","archived":false,"fork":false,"pushed_at":"2025-05-09T16:28:22.000Z","size":9884,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-05-17T01:16:07.325Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/a3ro-dev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-05-02T11:12:32.000Z","updated_at":"2025-05-09T16:28:25.000Z","dependencies_parsed_at":"2025-05-02T12:26:11.396Z","dependency_job_id":"71a32209-495f-4fbd-a0ce-4299ccb17f31","html_url":"https://github.com/a3ro-dev/autoFaceFraming","commit_stats":null,"previous_names":["a3ro-dev/autofaceframing"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/a3ro-dev/autoFaceFraming","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/a3ro-dev%2FautoFaceFraming","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/a3ro-dev
%2FautoFaceFraming/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/a3ro-dev%2FautoFaceFraming/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/a3ro-dev%2FautoFaceFraming/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/a3ro-dev","download_url":"https://codeload.github.com/a3ro-dev/autoFaceFraming/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/a3ro-dev%2FautoFaceFraming/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":259688249,"owners_count":22896371,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-05-17T01:13:47.155Z","updated_at":"2025-06-13T17:31:58.122Z","avatar_url":"https://github.com/a3ro-dev.png","language":"Python","readme":"# Auto Face Framing\n\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n**Auto Face Framing** is a Python application that creates a virtual camera output, intelligently detecting and tracking the user's face in real-time. It ensures your face remains centered and well-framed during video calls, enhancing your professional appearance. The application uses OpenCV for robust face detection and tracking.\n\n## Overview\n\nTired of manually adjusting your camera during video conferences? Auto Face Framing takes care of it for you! 
It provides a smooth, stabilized virtual camera feed that can be used with popular video conferencing software like Zoom, Microsoft Teams, Google Meet, and more.\n\n## Key Features\n\n*   **Real-time Face Detection \u0026 Tracking**: Utilizes OpenCV for accurate and efficient face detection.\n*   **Automatic Framing \u0026 Zoom**: Intelligently adjusts the frame and zoom level to keep your face centered and appropriately sized.\n*   **Smooth Camera Movements**: Implements algorithms for smooth and natural-feeling camera adjustments.\n*   **Virtual Camera Output**: Creates a virtual camera device recognized by most video conferencing applications.\n*   **Cross-Platform**: Supports Windows, macOS, and Linux.\n*   **Highly Configurable**: Adjust settings like resolution, FPS, framing preferences, and more via a YAML configuration file.\n*   **Debug Mode**: Optional overlay to display real-time detection information and performance metrics.\n*   **CLI Interface**: Easy-to-use command-line interface for starting and configuring the application.\n*   **Fancy Terminal UI**: Provides an enhanced terminal experience with spinners and colored output (can be disabled).\n\n## Project Structure\n\nA brief overview of the project's layout:\n\n```\nautoFaceFraming/\n├── autoFaceFraming/            # Main package directory\n│   ├── __init__.py\n│   ├── cli.py                  # Command-line interface logic\n│   ├── camera_stream.py        # Handles camera input, processing, and output\n│   ├── face_detector.py        # Face detection algorithms\n│   ├── tracker.py              # Face tracking logic\n│   ├── config/\n│   │   └── settings.yaml       # Default application settings\n│   ├── models/                 # Directory for detection models\n│   │   └── face_detector/\n│   │       ├── deploy.prototxt # DNN model architecture\n│   │       └── res10_300x300_ssd_iter_140000.caffemodel # DNN model weights\n│   └── utils/                  # Utility modules\n│       ├── __init__.py\n│ 
      ├── cli_spinner.py      # Animated spinner for CLI\n│       └── video_utils.py      # Video processing utilities\n├── config/                     # User-specific configuration (copied from package on first run)\n│   └── settings.yaml\n├── models/                     # User-specific models (copied or downloaded)\n│   └── face_detector/\n├── tests/                      # Unit and integration tests\n│   └── ...\n├── install.sh                  # Installation script for Linux/macOS\n├── setup_windows.bat           # Windows setup helper script\n├── requirements.txt            # Python package dependencies\n├── setup.py                    # Script for packaging and distribution\n├── README.md                   # This file\n├── TUTORIAL.md                 # Detailed tutorial\n└── LICENSE                     # Project license\n```\n\n## Prerequisites\n\nBefore you begin, ensure you have the following installed:\n\n*   **Python**: Version 3.10+ recommended.\n*   **pip**: Python package installer.\n*   **Git**: For cloning the repository.\n\n### OS-Specific Prerequisites:\n\n*   **Windows**:\n    *   A virtual camera driver. We recommend **OBS Studio** (which includes OBS Virtual Camera) or **Unity Capture**.\n    *   If Python is not in your PATH, `setup_windows.bat` will attempt to add it.\n*   **Linux**:\n    *   `v4l2loopback` kernel module for creating virtual cameras.\n        *   Debian/Ubuntu: `sudo apt install v4l2loopback-dkms`\n        *   Fedora: `sudo dnf install v4l2loopback`\n        *   After installation, load the module: `sudo modprobe v4l2loopback`\n*   **macOS**:\n    *   A virtual camera driver. We recommend **OBS Studio** (which includes OBS Virtual Camera).\n\n## Installation\n\n1.  **Clone the Repository**:\n    ```bash\n    git clone https://github.com/a3ro-dev/autoFaceFraming.git\n    cd autoFaceFraming\n    ```\n\n2.  
**Run the Installer**:\n\n    *   **Windows**:\n        Open Command Prompt or PowerShell **as Administrator** and run:\n        ```batch\n        setup_windows.bat\n        ```\n        This script will:\n        *   Attempt to add Python to your PATH if not already configured.\n        *   Guide you through installing necessary Python packages.\n        *   Set up the `start-face-framing.bat` convenience script.\n\n    *   **macOS and Linux**:\n        Open your terminal and run:\n        ```bash\n        ./install.sh\n        ```\n        This script will:\n        *   Guide you through installing necessary Python packages.\n        *   Set up the `start-face-framing.sh` convenience script and make it executable.\n        *   Install the package in user mode, making the `start-face-framing` command available.\n\n    The installer will also create a default `config/settings.yaml` if one doesn't exist.\n\n## Usage\n\nOnce installed, you can start Auto Face Framing using one of the following methods:\n\n*   **Recommended (all platforms, if PATH is configured by installer)**:\n    ```bash\n    start-face-framing\n    ```\n\n*   **Windows (alternative)**:\n    ```batch\n    ./start-face-framing.bat\n    ```\n\n*   **Linux/macOS (alternative)**:\n    ```bash\n    ./start-face-framing.sh\n    ```\n    or\n    ```bash\n    python3 -m autoFaceFraming.cli\n    ```\n\n### Command-Line Options\n\nYou can view available command-line options by running:\n```bash\nstart-face-framing --help\n```\nThis will show options for specifying a custom config file, resolution, FPS, disabling virtual camera output, and more.\n\n### Selecting the Virtual Camera\n\nAfter starting the application, \"Auto Face Framing Virtual Camera\" (or a similar name, depending on your OS and virtual camera driver) should appear as a camera option in your video conferencing software (e.g., Zoom, Teams, Meet). 
Select it to use the auto-framed video feed.\n\n## Configuration\n\nThe application's behavior can be customized through the `config/settings.yaml` file. If this file doesn't exist in the project's root `config` directory, the application will copy the default settings from the package's `autoFaceFraming/config/settings.yaml` upon first run.\n\nKey configurable parameters include:\n\n*   **Camera Settings**:\n    *   `camera_index`: Index of the physical camera to use (e.g., 0, 1). -1 for auto-detect.\n    *   `resolution`: Desired camera resolution (e.g., `width: 1280`, `height: 720`).\n    *   `frame_rate`: Desired camera frame rate (e.g., 24, 30).\n*   **Face Detection**:\n    *   `detector_type`: `haar` (faster, less accurate) or `dnn` (slower, more accurate).\n    *   `dnn_model_path`, `dnn_proto_path`: Paths to DNN model files.\n    *   `confidence_threshold`: Minimum confidence for DNN face detection.\n*   **Framing Logic**:\n    *   `target_face_scale`: Desired scale of the face relative to the frame height.\n    *   `zoom_speed`: How quickly the camera zooms.\n    *   `pan_speed`: How quickly the camera pans.\n    *   `smoothing_factor`: Level of smoothing applied to camera movements.\n*   **Virtual Camera**:\n    *   `virtual_camera_name`: Name for the virtual camera device.\n    *   `output_resolution`: Resolution for the virtual camera output.\n*   **UI/Debug**:\n    *   `show_debug_hud`: Display a debug overlay on the preview window.\n    *   `show_preview_window`: Whether to display a local preview window.\n    *   `fancy_terminal_ui`: Enable/disable enhanced terminal output.\n    *   `spinner_style`: Style of the loading spinner in the terminal.\n\n## Troubleshooting\n\n*   **Virtual Camera Not Found**:\n    *   Ensure you have a virtual camera driver installed (OBS, Unity Capture for Windows; v4l2loopback for Linux; OBS for macOS).\n    *   For Linux, make sure the `v4l2loopback` module is loaded (`sudo modprobe v4l2loopback`). 
You might need to specify `exclusive_caps=1` when loading if some applications don't see the camera: `sudo modprobe v4l2loopback exclusive_caps=1 video_nr=10 card_label=\"AutoFrameCam\"`. The `video_nr` should be an unused video device number.\n    *   Restart your video conferencing application after starting Auto Face Framing.\n*   **Low Performance/Lag**:\n    *   Try a lower camera resolution or frame rate in `config/settings.yaml`.\n    *   Switch to the `haar` face detector if using `dnn`.\n    *   Ensure your system meets the general requirements for real-time video processing.\n    *   Close other resource-intensive applications.\n*   **Python Not Found (Windows)**:\n    *   Ensure Python was added to your PATH during its installation. The `setup_windows.bat` script attempts to do this, but manual configuration might be needed in some cases.\n*   **Permission Denied (Linux/macOS)**:\n    *   Make sure `install.sh` and `start-face-framing.sh` are executable: `chmod +x install.sh start-face-framing.sh`.\n\n## Dependencies\n\nThe project relies on several Python libraries. Key dependencies are listed in `requirements.txt` and include:\n\n*   **opencv-python**: For all computer vision tasks.\n*   **numpy**: For numerical operations, especially with image data.\n*   **pyvirtualcam**: For creating the virtual camera output.\n*   **PyYAML**: For loading and parsing the `settings.yaml` configuration file.\n*   **colorama**: For cross-platform colored terminal text.\n*   **psutil**: For system monitoring (optional, used in debug HUD).\n\nThe `install.sh` or `setup_windows.bat` scripts will help you install these.\n\n## Contributing\n\nContributions are welcome! Whether it's bug fixes, feature enhancements, or documentation improvements, please feel free to:\n\n1.  Fork the repository.\n2.  Create a new branch for your feature or fix (`git checkout -b feature/your-feature-name`).\n3.  Make your changes and commit them (`git commit -am 'Add some feature'`).\n4.  
Push to the branch (`git push origin feature/your-feature-name`).\n5.  Open a Pull Request.\n\nPlease ensure your code adheres to general Python best practices and includes tests for new functionality where applicable.\n\n## License\n\nThis project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.\n\n## Acknowledgments\n\n*   The **OpenCV** team and community for their invaluable computer vision library.\n*   Contributors to the **pyvirtualcam** library.\n*   Everyone who has provided inspiration and support for this project.\n\n---\n*Developed by Akshat Kushwaha (@a3rodev)*\n*Last Updated: May 9, 2025*\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fa3ro-dev%2Fautofaceframing","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fa3ro-dev%2Fautofaceframing","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fa3ro-dev%2Fautofaceframing/lists"}