{"id":13698780,"url":"https://github.com/burnpiro/tiny-face-detection-tensorflow2","last_synced_at":"2025-10-29T06:08:32.542Z","repository":{"id":91873629,"uuid":"221500469","full_name":"burnpiro/tiny-face-detection-tensorflow2","owner":"burnpiro","description":"Tiny Tensorflow 2 face detector","archived":false,"fork":false,"pushed_at":"2020-02-17T16:13:07.000Z","size":745,"stargazers_count":27,"open_issues_count":1,"forks_count":8,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-03-22T21:07:45.481Z","etag":null,"topics":["artificial-intelligence","face-detection","machine-learning","tensorboard","tensorflow2"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/burnpiro.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2019-11-13T16:11:48.000Z","updated_at":"2024-07-13T15:32:01.000Z","dependencies_parsed_at":"2024-04-08T02:58:36.531Z","dependency_job_id":"827adbaf-afa5-4e99-bb86-55cfef22f977","html_url":"https://github.com/burnpiro/tiny-face-detection-tensorflow2","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/burnpiro%2Ftiny-face-detection-tensorflow2","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/burnpiro%2Ftiny-face-detection-tensorflow2/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/burnpiro%2Ftiny-face-detection-tensorflow2/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositorie
# Tiny Face Detection with TensorFlow 2.0

![alt text][image]

### Quick start

- Install TensorFlow and the other packages from `requirements.txt`
- Get the [dataset](#dataset)
- Run `python train.py` (takes a while, depending on your machine)
- Run `python detect.py --image my_image.jpg`

### Important files

- [`./data/data_generator.py`](#data_generatorpy) - generates train/val data from WIDER FACE
- [`./model/model.py`](#modelpy) - builds the TF model
- [`./model/loss.py`](#losspy) - definition of the **loss function** used for training
- [`./model/validation.py`](#validationpy) - definition of the validation callback used during training
- [`./config.py`](#configpy) - stores the network/training/validation configuration
- [`./detect.py`](#detectpy) - runs the model against a given image and generates an output image
- `./draw_boxes` - helper function for `./detect.py`; draws boxes on a cv2 image
- `./print_model.py` - prints the current model structure
- [`./train.py`](#trainpy) - trains the model and saves weights based on training results and the validation function

### Dataset

We use the WIDER FACE dataset. It contains over 32k images with almost 400k faces and is publicly available at
[http://shuoyang1213.me/WIDERFACE/](http://shuoyang1213.me/WIDERFACE/)

- [Training Data GDrive](https://drive.google.com/file/d/0B6eKvaijfFUDQUUwd21EckhUbWs/view?usp=sharing)
- [Val Data GDrive](https://drive.google.com/file/d/0B6eKvaijfFUDd3dIRmpvSk8tLUk/view?usp=sharing)
- [Test Data GDrive](https://drive.google.com/file/d/0B6eKvaijfFUDbW4tdGpaYjgzZkU/view?usp=sharing)
- [Annotations](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/bbx_annotation/wider_face_split.zip)

Please put all the data into the `./data` folder.

The data structure is described in `./data/wider_face_split/readme.txt`. We only need the box annotations, but there is more data available if you want to use it.

### Files

#### data_generator.py

@config_path - path to the `data/wider_face_split/wider_face_train_bbx_gt.txt` file (defined in `cfg.TRAIN.ANNOTATION_PATH`)
@file_path - path to the folder with images (defined in `cfg.TRAIN.DATA_PATH`)

- `__init__(file_path, config_path, debug=False)` - loops over all images listed in the txt file (based on `config_path`) and stores them inside the generator, to be retrieved via `__getitem__`
- `__len__()` - unsurprisingly returns the length of our data (precisely, the number of batches: `data/batch_size`)
- `__getitem__(idx)` - returns the data for the given `idx`, as `Array<imagePath>, Array<h, w, yc, xc, class>`

#### model.py

- `create_model(trainable=False)` - creates the model based on the definition; if you want the model to be fully trainable (not only the output layers), set `trainable` to `True`

#### loss.py

- `loss(y_true, y_pred)` - returns the value of the **loss function** for the current prediction (`y_true` is a box from the dataset, `y_pred` is the network's output)
- `get_box_highest_percentage(arr)` - helper function for `loss` that finds the best box match

#### validation.py

- `on_epoch_end(self, epoch, logs)` - calculates `IoU` and `mse` for the validation set
- `get_box_highets_percentage(self, mask)` - helper function; you can ignore it

#### config.py

Just a config; there are a couple of important things in it:
- `ALPHA` - MobileNet's "alpha" multiplier; a higher value means a more complex network (slower, more precise)
- `GRID_SIZE` - output grid size; **7** is a good value for a low ALPHA, but you might want to set it higher for larger ALPHAs and add an UpSampling layer to model.py
- `INPUT_SIZE` - should be adjusted based on the base network used (**224** for MobileNetV2, but check the input size if you change the model)

Under the `TRAIN` prefix there are a couple of training hyperparameters you can adjust.

#### detect.py

You first have to train the model to get at least one `model-0.xx.h5` weights file.

Usage:
```bash
# basic usage
python detect.py --image path_to_my_image.jpg

# use different trained weights and output path
python detect.py --image path_to_my_image.jpg --weights model-0.64.h5 --output output_path.jpg
```

#### train.py

There are no parameters for it, but you might want to read the file. It runs based on `config.py` and the other files described above. If you want to resume training from a specific point, uncomment `IF TRAINABLE` and provide a weights file.

After running, the training script generates `./logs/fit/**` files.
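As a side note, the `IoU` metric that the validation callback reports at the end of each epoch can be sketched as plain intersection-over-union of two `(x1, y1, x2, y2)` boxes. This is a generic sketch, not the repository's exact implementation (which works on the grid output described in config.py):

```python
def iou(box_a, box_b):
    """Intersection over Union for two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero when boxes don't overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A perfect prediction gives `1.0`, disjoint boxes give `0.0`; anything above roughly `0.5` is usually counted as a correct detection.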
You can use **TensorBoard** to visualise training:

```bash
tensorboard --logdir logs/fit
```

[image]: ./example_output.jpg "Sample Output"
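To give a feel for the `GRID_SIZE`/`INPUT_SIZE` relationship from config.py: with `INPUT_SIZE = 224` and `GRID_SIZE = 7`, each output cell covers a 32×32 px patch of the input, and the cell containing a box centre is the one responsible for predicting it. The helper below (`box_to_cell` is a hypothetical name, not a function from this repo, and the real target encoding in `data_generator.py` may differ) sketches that mapping:

```python
INPUT_SIZE = 224  # MobileNetV2 input resolution (see config.py)
GRID_SIZE = 7     # 7x7 output grid -> 224 / 7 = 32 px per cell

def box_to_cell(xc, yc):
    """Map a box centre in input-image pixels to its (col, row) grid cell."""
    cell_px = INPUT_SIZE // GRID_SIZE  # 32 px per cell
    return int(xc // cell_px), int(yc // cell_px)
```

This is also why raising `ALPHA` may call for a larger `GRID_SIZE`: a finer grid means smaller cells, so nearby faces are less likely to fall into the same cell.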