{"id":13698812,"url":"https://github.com/610265158/faceboxes-tensorflow","last_synced_at":"2025-05-04T04:30:39.449Z","repository":{"id":39740240,"uuid":"199050788","full_name":"610265158/faceboxes-tensorflow","owner":"610265158","description":" a tensorflow  implement faceboxes","archived":false,"fork":false,"pushed_at":"2022-11-21T21:32:16.000Z","size":9516,"stargazers_count":47,"open_issues_count":7,"forks_count":18,"subscribers_count":4,"default_branch":"master","last_synced_at":"2024-11-13T03:34:51.015Z","etag":null,"topics":["faceboxes","facedetect","facedetector","tensorflow","tensorflow2"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/610265158.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2019-07-26T16:46:17.000Z","updated_at":"2024-05-25T18:36:41.000Z","dependencies_parsed_at":"2022-09-09T14:40:37.344Z","dependency_job_id":null,"html_url":"https://github.com/610265158/faceboxes-tensorflow","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/610265158%2Ffaceboxes-tensorflow","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/610265158%2Ffaceboxes-tensorflow/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/610265158%2Ffaceboxes-tensorflow/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/610265158%2Ffaceboxes-tensorflow/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/610265158","download_url":"https://codeload.g
ithub.com/610265158/faceboxes-tensorflow/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252288912,"owners_count":21724323,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["faceboxes","facedetect","facedetector","tensorflow","tensorflow2"],"created_at":"2024-08-02T19:00:53.317Z","updated_at":"2025-05-04T04:30:38.693Z","avatar_url":"https://github.com/610265158.png","language":"Python","readme":"# [FaceBoxes](https://arxiv.org/abs/1708.05234)\n\n## introduction\n\nA TensorFlow 2.0 implementation of FaceBoxes.\n\n**CAUTION: this is the tensorflow2 branch; if you need to work with TensorFlow 1, please switch to the tf1 branch.**\n\nSome changes have been made to the RDCL module to achieve better performance and faster inference:\n\n   1. the input size is 512 (1024 in the paper), so the first conv has stride 2 and kernel size 7x7x12.\n   2. the first maxpool is replaced by a 3x3x24 stride-2 conv.\n   3. the second 5x5 stride-2 conv and the maxpool are replaced by two 3x3 stride-2 convs.\n   4. 
anchor-based sampling is used in data augmentation.\n\n   The code looks like this:\n   ```python\n   with tf.name_scope('RDCL'):\n       net = conv2d(net_in, 12, [7, 7], stride=2, activation_fn=tf.nn.relu, scope='init_conv1')\n       net = conv2d(net, 24, [3, 3], stride=2, activation_fn=tf.nn.crelu, scope='init_conv2')\n\n       net = conv2d(net, 32, [3, 3], stride=2, activation_fn=tf.nn.relu, scope='conv1x1_before1')\n       net = conv2d(net, 64, [3, 3], stride=2, activation_fn=tf.nn.crelu, scope='conv1x1_before2')\n\n       return net\n   ```\n\n**I'd like to name it FaceBoxes++, if you don't mind.**\n\nThe pretrained model can be downloaded from:\n\n+ [baidu disk](https://pan.baidu.com/s/14glOjQYRxKL-QPPHl6HRRQ) (code zn3x)\n\n+ [google drive](https://drive.google.com/open?id=1KO2PuHiBgQEY5uOyLGdFbxBlqPAosY-s)\n\nEvaluation result on FDDB:\n\n ![fddb](https://github.com/610265158/faceboxes-tensorflow/blob/master/figures/Figure_1.png)\n\n| fddb   |\n| :------: | \n|  0.96 | \n\n**Speed: it runs at over 70 FPS on CPU (i7-8700K), 30 FPS (i5-7200U), and 140 FPS on GPU (2080 Ti) with a fixed input size of 512, TF 2.0, multi-threaded.**\n**I think the input size, inference time, and accuracy are well suited for real applications :)**\n\nI hope the code helps you; contact me at 2120140200@mail.nankai.edu.cn if you have any questions.\n\n## requirements\n\n+ tensorflow2.0\n\n+ tensorpack (data provider)\n\n+ opencv\n\n+ python 3.6\n\n## usage\n\n### train\n1. download the WIDER FACE data from http://shuoyang1213.me/WIDERFACE/ and extract WIDER_train, WIDER_val, and wider_face_split into ./WIDER,\n2. download FDDB, extract FDDB-folds into ./FDDB, and the 2002, 2003 image folders into ./FDDB/img\n3. 
then run:\n\n    `python prepare_data.py`\n\n    It will produce train.txt and val.txt.\n\n    (If you want to train on your own data, prepare it like this:\n    `...../9_Press_Conference_Press_Conference_9_659.jpg| 483(xmin),195(ymin),735(xmax),543(ymax),1(class) ......`\n    one line per image. **Caution: class labels should start from 1; 0 means background.**)\n\n4. then run:\n\n    `python train.py`\n\n    If you want to inspect the data during training, set vis to True in train_config.py.\n\n### finetune\n1.  (If you want to train on your own data, prepare it like this:\n    `...../9_Press_Conference_Press_Conference_9_659.jpg| 483(xmin),195(ymin),735(xmax),543(ymax),1(class) ......`\n    one line per image. **Caution: class labels should start from 1; 0 means background.**)\n\n2.  set config.MODEL.pretrained_model='./model/detector/variables/variables' in train_config.py;\n    the model directory structure is:\n    ```\n    ./model/\n    ├── detector\n    │   ├── saved_model.pb\n    │   └── variables\n    │       ├── variables.data-00000-of-00001\n    │       └── variables.index\n    ```\n3.  adjust the learning-rate policy\n\n4. `python train.py`\n\n### evaluation\n\n```\n    python test/fddb.py [--model [TRAINED_MODEL]] [--data_dir [DATA_DIR]]\n                        [--split_dir [SPLIT_DIR]] [--result [RESULT_DIR]]\n    --model              Path of the saved model, default ./model/detector\n    --data_dir           Path of all fddb images\n    --split_dir          Path of the fddb folds\n    --result             Path to save fddb results\n```\n\nexample: `python model_eval/fddb.py --model model/detector --data_dir 'FDDB/img/' --split_dir FDDB/FDDB-folds/ --result 'result/'`\n\n### visualization\n![A demo](https://github.com/610265158/faceboxes-tensorflow/blob/master/figures/example2.png)\n\n2. 
`python vis.py --img_dir your_images_dir --model model/detector`\n\n3. or use a camera:\n`python vis.py --cam_id 0 --model model/detector`\n\nYou can check the code in vis.py and adapt it to your needs; it's simple.\n\n### reference\n[FaceBoxes: A CPU Real-time Face Detector with High Accuracy](https://arxiv.org/abs/1708.05234)\n","funding_links":[],"categories":["Projects 💛💛💛💛💛\u003ca name=\"Projects\" /\u003e"],"sub_categories":["人脸识别"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2F610265158%2Ffaceboxes-tensorflow","html_url":"https://awesome.ecosyste.ms/projects/github.com%2F610265158%2Ffaceboxes-tensorflow","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2F610265158%2Ffaceboxes-tensorflow/lists"}