{"id":28174659,"url":"https://github.com/bbs1412/distributedattendancesystem","last_synced_at":"2025-10-09T09:19:32.814Z","repository":{"id":290817754,"uuid":"913649889","full_name":"Bbs1412/DistributedAttendanceSystem","owner":"Bbs1412","description":"This is the distributed processing version of the 'Smart Attendance System' project.","archived":false,"fork":false,"pushed_at":"2025-05-12T00:00:51.000Z","size":982,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-07-29T22:53:55.940Z","etag":null,"topics":["attendance-system","distributed-computing","image-processing","machine-learning","parallel-processing","socket"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Bbs1412.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-01-08T05:07:49.000Z","updated_at":"2025-05-12T00:00:54.000Z","dependencies_parsed_at":null,"dependency_job_id":"0f832be1-1f98-4010-9498-abb27f2e5834","html_url":"https://github.com/Bbs1412/DistributedAttendanceSystem","commit_stats":null,"previous_names":["bbs1412/distributedattendancesystem"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/Bbs1412/DistributedAttendanceSystem","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Bbs1412%2FDistributedAttendanceSystem","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Bbs1412%2FDistributedAttendanceSystem/tags","releases_url":"https://rep
os.ecosyste.ms/api/v1/hosts/GitHub/repositories/Bbs1412%2FDistributedAttendanceSystem/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Bbs1412%2FDistributedAttendanceSystem/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Bbs1412","download_url":"https://codeload.github.com/Bbs1412/DistributedAttendanceSystem/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Bbs1412%2FDistributedAttendanceSystem/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279001122,"owners_count":26083021,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-09T02:00:07.460Z","response_time":59,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["attendance-system","distributed-computing","image-processing","machine-learning","parallel-processing","socket"],"created_at":"2025-05-15T22:17:11.519Z","updated_at":"2025-10-09T09:19:32.799Z","avatar_url":"https://github.com/Bbs1412.png","language":"Python","readme":"# Distributed Attendance System\nThis is the distributed processing version of the [Smart Attendance System](https://github.com/Bbs1412/SmartAttendanceSystem) project; it spreads attendance computation through face recognition across multiple client machines. 
\n\n\n\u003e [!IMPORTANT]  \n\u003e - This README contains demonstration videos which your browser or device might not support.  \n\u003e - In case the demo videos are not visible, try refreshing the page.\n\u003e - If they remain unplayable, switch to a different browser or try accessing from another device.\n\n\n## Index:\n- [Distributed Attendance System](#distributed-attendance-system)  \n- [Project Overview](#-project-overview)\n    - [Aim](#aim)\n    - [Methodology](#methodology)\n    - [Features](#features)\n    - [Tech Stack](#tech-stack)\n- [Steps to run](#-steps-to-run)\n    - [Server Setup](#server-setup)\n    - [Client Setup](#client-setup)\n- [Extras](#-extras)\n- [Contributions](#-contributions)\n- [License](#-license)\n- [Contact](#-contact)\n\n\n## 🎯 Project Overview:\n### Aim:\n+ Leverage the power of multiple clients to process video frames for attendance marking using face recognition.\n+ Implement both Static and Dynamic load balancing for efficient processing and faster results.\n\n### Methodology:\n1. **Server Initialization:**\n   - A web server starts the distributed server, which connects to multiple clients (as configured in the `.env` file).\n   - Initialization includes:\n        - Accepting client connections and registering their details.\n        - Sending pre-trained face models and essential files to each client.\n        - \u003cvideo src=\"https://github.com/user-attachments/assets/8cdd03d5-21d9-45b0-a7d2-24460826e706\" type=\"video/mp4\" alt=\"Server-Initialization-Video\"\u003e\u003c/video\u003e \n        - Setting clients on standby mode for task distribution.\n        - \u003cvideo src=\"https://github.com/user-attachments/assets/805b6065-dc4c-4d29-b5fe-b1482ca0b318\" type=\"video/mp4\" alt=\"Client-Initialization-Video\"\u003e\u003c/video\u003e\n\n2. **Task Distribution:**\n   - The server accepts video uploads from the frontend.\n   - Tasks are assigned to clients based on the selected load balancing mode:\n        1. 
**`Static Load Balancing:`** \n            - Tasks are evenly distributed before processing begins. \n            - All clients must finish their tasks before results can be combined.\n            - \u003cvideo src=\"https://github.com/user-attachments/assets/3729f1da-a817-4407-933e-c4abce047f8f\" type=\"video/mp4\" alt=\"Client-Static-Load-Balancing-Video\"\u003e\u003c/video\u003e\n            - This means the server has to wait for all clients to complete their tasks.\n            - One client has been deliberately delayed to simulate the difference in processing time.\n            - \u003cvideo src=\"https://github.com/user-attachments/assets/8283a7fc-f184-428e-b0a1-aa42452f79b3\" type=\"video/mp4\" alt=\"Server-Static-Load-Balancing-Video\"\u003e\u003c/video\u003e\n\n        2. **`Dynamic Load Balancing:`** \n            - Tasks are assigned based on client processing speed in real time, ensuring efficient resource utilization.\n            - \u003cvideo src=\"https://github.com/user-attachments/assets/35abba42-e2b4-4d7b-a4cb-eb9b8be33d77\" type=\"video/mp4\" alt=\"Client-Dynamic-Load-Balancing-Video\"\u003e\u003c/video\u003e\n            - All clients finish their tasks at approximately the same time.\n            - \u003cvideo src=\"https://github.com/user-attachments/assets/982ee33e-b05b-42b0-9ed9-3ab65ec481e1\" type=\"video/mp4\" alt=\"Server-Dynamic-Load-Balancing-Video\"\u003e\u003c/video\u003e\n\n3. **Processing:**\n   - Clients process video frames using OpenCV and `face_recognition`.\n   - Results are returned to the server.\n   - Results from the individual clients are combined and rendered as attendance data.\n\n4. 
**Output:**\n   - Results are displayed on a web page and can also be downloaded in Excel format.\n\n\n### Features:\n- **Web-based Interface:** Upload videos and view/download attendance results.\n- **Parallel Processing:** Faster processing through distributed clients.\n- **Customizable Load Balancing:** Switch between static and dynamic modes.\n- **Thread locking:** For consistent read-write operations on shared resources.\n- **Accurate Attendance Marking:** Threshold-based attendance marking ensures precision.\n- **Detailed Reporting:** Faculty can access detailed results and downloadable attendance records.\n\n\n### Tech Stack:\n+ **Backend:** Python, Flask\n+ **Frontend:** HTML, CSS, JavaScript\n+ **Libraries:** OpenCV, face_recognition, threading, socket\n+ **Data Formats:** JSON, Excel\n+ **Tools:** Virtual environment (venv), Python's standard libraries\n\n---\n\n## 🚀 Steps to run:\n\n### Server Setup:\n1. Clone the repository:\n    ```bash\n    git clone --depth 1 https://github.com/Bbs1412/DistributedAttendanceSystem.git\n    ```\n    \n1. Navigate to the project directory:\n    ```bash\n    cd DistributedAttendanceSystem\n    ```\n\n1. Create a virtual environment and install dependencies:\n    ```bash\n    python -m venv venv\n    venv\\Scripts\\activate\n    pip install -r \"requirements_all.txt\"\n    ```\n\n1. Configure the number of clients in the `.env` file:\n    ```plaintext\n    no_of_clients = 2\n    ```\n\n1. 
Train the face recognition models:\n    - Create a folder named `Pics` in the project directory and add images of the people you want to recognize to it.\n    - Update the ***people*** list in `face_train.py` (~line 73):\n        ```Python\n        Person(\n            reg='registration_number',\n            name='Name',\n            image='person_name.jpg',      # Image should be in the 'Pics' folder\n            display_name='Display Name',  # optional\n            pickle_name='person_name.pkl' # optional\n        )\n        ```\n    - Run the training script:\n        ```bash\n        python face_train.py\n        ```\n\n1. Start the web server:\n    ```bash\n    python app.py\n    ```\n\n1. Connect clients:\n    - Run `distributed_client.py` on all clients within the configured timeout window.\n\n1. Open the browser at:\n    ```plaintext\n    http://localhost:5000\n    ```\n\n### Client Setup:\n1. Clone the repository:\n    ```bash\n    git clone --depth 1 https://github.com/Bbs1412/DistributedAttendanceSystem.git\n    ```\n\n1. Copy `networking.py` and `logger.py` from the root directory to the `Client/` directory.\n\n1. Navigate to the client directory:\n    ```bash\n    cd DistributedAttendanceSystem/Client\n    ```\n\n1. Create a virtual environment and install dependencies:\n    ```bash\n    python -m venv venv\n    venv\\Scripts\\activate\n    cp ../requirements_all.txt .\n    pip install -r \"requirements_all.txt\"\n    ```\n\n1. All other files outside `Client/` can be deleted.\n\n1. Run the client once the main server is up:\n    ```bash\n    python distributed_client.py\n    ```\n\n1. 
Repeat the above steps for each client.\n\n---\n\n## 📝 Extras:\n- **`Networking module:`** \n    - The [`networking.py`](networking.py) module provides an abstraction layer for client-server communication.\n    - It uses a custom, structured JSON-based protocol, which is documented in the [`Protocol.json`](protocol.json) file.\n    - Includes functions for sending and receiving data over sockets.\n    - Robust error handling and logging are implemented for better debugging.\n    - Sender-side: Retries sending the same message up to 3 times if an error occurs (can be adjusted in the `networking.py` file).\n    - Receiver-side: Sends `NACK` (negative acknowledgment) to the sender if an error is detected, prompting the sender to resend the data.\n    - Data sent in parts over the network is reassembled at the receiver's end.\n\n- **`Logger module:`** \n    - The [`logger.py`](logger.py) module is also implemented using the same protocol. \n    - It provides logging functionality for efficient debugging and status tracking of the distributed system.\n    - It can be disabled in high-performance scenarios, as its frequent I/O operations may impact performance.\n\n\n---\n\n## 🤝 Contributions:\n   Any contributions or suggestions are welcome! 
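\n\nThe retry-and-`NACK` pattern described in the Extras section above can be sketched in a few lines. This is a minimal illustration only: the function names, the 4-byte length prefix, and the one-byte acknowledgments are assumptions for the sketch, not the actual wire format defined in `Protocol.json`.\n\n

```python
# Illustrative sketch of a length-prefixed JSON message protocol with
# sender-side retries and a receiver-side NACK, loosely modeled on the
# behavior described for networking.py. All names and the wire format
# here are hypothetical, not the project's actual API.
import json
import socket
import struct
import threading

MAX_RETRIES = 3          # sender retries, mirroring the README's description
ACK, NACK = b"A", b"N"   # one-byte acknowledgments (assumed, for the sketch)

def _send_frame(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length prefix followed by the payload
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    # Reassemble data that may arrive in parts over the network
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def send_message(sock: socket.socket, message: dict) -> None:
    payload = json.dumps(message).encode()
    for _ in range(MAX_RETRIES):
        _send_frame(sock, payload)
        if _recv_exact(sock, 1) == ACK:   # wait for the receiver's verdict
            return
    raise RuntimeError("message rejected after retries")

def recv_message(sock: socket.socket) -> dict:
    while True:
        size = struct.unpack(">I", _recv_exact(sock, 4))[0]
        payload = _recv_exact(sock, size)
        try:
            message = json.loads(payload)
        except json.JSONDecodeError:
            sock.sendall(NACK)            # corrupt frame: ask for a resend
            continue
        sock.sendall(ACK)
        return message

# Demo over an in-process socket pair (no real network needed):
a, b = socket.socketpair()
t = threading.Thread(target=send_message, args=(a, {"type": "TASK", "frames": [0, 1, 2]}))
t.start()
received = recv_message(b)
t.join()
print(received)  # {'type': 'TASK', 'frames': [0, 1, 2]}
```

\nThe length prefix lets the receiver know exactly how many bytes to reassemble before parsing, and the acknowledgment byte gives the sender a signal to retry on failure.\n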
\n\n\n## 📜 License: \n[![Code-License](https://img.shields.io/badge/License%20-GNU%20--%20GPL%20v3.0-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)\n- This project is licensed under the `GNU General Public License v3.0`\n- See the [LICENSE](LICENSE) file for details.\n- You can use the code with proper credits to the author.\n\n\n## 📧 Contact:\n- **Email -** [bhushanbsongire@gmail.com](mailto:bhushanbsongire@gmail.com)\n\n\n---\n\u003c!-- keep this:\n+ Web server at starts a separate distributed server.\n+ Distributed server first connects to multiple clients (which can be adjusted dynamically in the .env file) for processing the video.\n+ Initialization phase is completed first:\n  \u003cdetails open\u003e\n    \u003csummary\u003eExpand for details\u003c/summary\u003e\n    \u003cul\u003e \n      \u003cli\u003e Client connection it accepted and name is sent   to server. \u003c/li\u003e\n      \u003cli\u003e Pre trained face models are sent to client. \u003c/li\u003e\n      \u003cli\u003e\u003cvideo width=\"100%\" controls src=\"https://github.com/user-attachments/assets/ef51f1ba-2109-4f6d-b57e-3d4163b8a152\" type=\"video/mp4\"\u003e\u003c/li\u003e\n    \u003c/ul\u003e\n  \u003c/details\u003e \n+ as some\n--\u003e","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbbs1412%2Fdistributedattendancesystem","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbbs1412%2Fdistributedattendancesystem","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbbs1412%2Fdistributedattendancesystem/lists"}