{"id":13444199,"url":"https://github.com/calmiLovesAI/Basic_CNNs_TensorFlow2","last_synced_at":"2025-03-20T18:31:21.144Z","repository":{"id":47509234,"uuid":"207753171","full_name":"calmiLovesAI/Basic_CNNs_TensorFlow2","owner":"calmiLovesAI","description":"A tensorflow2 implementation of some basic CNNs(MobileNetV1/V2/V3, EfficientNet, ResNeXt, InceptionV4, InceptionResNetV1/V2, SENet, SqueezeNet, DenseNet, ShuffleNetV2, ResNet).","archived":false,"fork":false,"pushed_at":"2021-11-24T04:27:38.000Z","size":152,"stargazers_count":522,"open_issues_count":27,"forks_count":177,"subscribers_count":19,"default_branch":"master","last_synced_at":"2024-05-20T10:10:18.469Z","etag":null,"topics":["densenet","efficientnet","image-classification","image-recognition","inception-resnet-v2","inception-v4","mobilenet-v1","mobilenet-v2","mobilenet-v3","regnet","resnet","resnext","senet","seresnet","shufflenet-v2","squeezenet","tensorflow","tensorflow2"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/calmiLovesAI.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2019-09-11T07:43:19.000Z","updated_at":"2024-05-18T10:38:19.000Z","dependencies_parsed_at":"2022-09-24T14:11:04.984Z","dependency_job_id":null,"html_url":"https://github.com/calmiLovesAI/Basic_CNNs_TensorFlow2","commit_stats":null,"previous_names":["calmilovesai/basic_cnns_tensorflow2","calmisential/basic_cnns_tensorflow2"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/calmiLovesAI%2FBasic_CNNs_TensorFlow2","tags_url":"https://repos.ecosyst
e.ms/api/v1/hosts/GitHub/repositories/calmiLovesAI%2FBasic_CNNs_TensorFlow2/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/calmiLovesAI%2FBasic_CNNs_TensorFlow2/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/calmiLovesAI%2FBasic_CNNs_TensorFlow2/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/calmiLovesAI","download_url":"https://codeload.github.com/calmiLovesAI/Basic_CNNs_TensorFlow2/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244670120,"owners_count":20490919,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["densenet","efficientnet","image-classification","image-recognition","inception-resnet-v2","inception-v4","mobilenet-v1","mobilenet-v2","mobilenet-v3","regnet","resnet","resnext","senet","seresnet","shufflenet-v2","squeezenet","tensorflow","tensorflow2"],"created_at":"2024-07-31T03:02:21.652Z","updated_at":"2025-03-20T18:31:20.884Z","avatar_url":"https://github.com/calmiLovesAI.png","language":"Python","funding_links":[],"categories":["Python"],"sub_categories":[],"readme":"# Basic_CNNs_TensorFlow2\nA tensorflow2 implementation of some basic CNNs.\n\n## Networks included:\n+ MobileNet_V1\n+ MobileNet_V2\n+ [MobileNet_V3](https://github.com/calmisential/MobileNetV3_TensorFlow2)\n+ [EfficientNet](https://github.com/calmisential/EfficientNet_TensorFlow2)\n+ [ResNeXt](https://github.com/calmisential/ResNeXt_TensorFlow2)\n+ [InceptionV4, InceptionResNetV1, 
InceptionResNetV2](https://github.com/calmisential/InceptionV4_TensorFlow2)\n+ SE_ResNet_50, SE_ResNet_101, SE_ResNet_152, SE_ResNeXt_50, SE_ResNeXt_101\n+ SqueezeNet\n+ [DenseNet](https://github.com/calmisential/DenseNet_TensorFlow2)\n+ ShuffleNetV2\n+ [ResNet](https://github.com/calmisential/TensorFlow2.0_ResNet)\n+ RegNet\n\n## Other networks\nFor AlexNet and VGG, see: https://github.com/calmisential/TensorFlow2.0_Image_Classification\u003cbr/\u003e\nFor InceptionV3, see: https://github.com/calmisential/TensorFlow2.0_InceptionV3\u003cbr/\u003e\nFor ResNet, see: https://github.com/calmisential/TensorFlow2.0_ResNet\n\n## Train\n1. Requirements:\n+ Python \u003e= 3.9\n+ TensorFlow \u003e= 2.7.0\n+ tensorflow-addons \u003e= 0.15.0\n2. To train the network on your own dataset, put the dataset under the folder **original dataset**; the directory should look like this:\n```\n|——original dataset\n   |——class_name_0\n   |——class_name_1\n   |——class_name_2\n   |——class_name_3\n```\n3. Run the script **split_dataset.py** to split the raw dataset into train, validation and test sets. The dataset directory will then look like this:\n ```\n|——dataset\n   |——train\n        |——class_name_1\n        |——class_name_2\n        ......\n        |——class_name_n\n   |——valid\n        |——class_name_1\n        |——class_name_2\n        ......\n        |——class_name_n\n   |——test\n        |——class_name_1\n        |——class_name_2\n        ......\n        |——class_name_n\n ```\n4. Run **to_tfrecord.py** to generate TFRecord files.\n5. Change the corresponding parameters in **config.py**.\n6. Run **show_model_list.py** to get the index of the model.\n7. Run **python train.py --idx [index]** to start training.\u003cbr/\u003e\nIf you want to train *EfficientNet*, you should change IMAGE_HEIGHT and IMAGE_WIDTH to match the chosen variant before training:\n- b0 = (224, 224)\n- b1 = (240, 240)\n- b2 = (260, 260)\n- b3 = (300, 300)\n- b4 = (380, 380)\n- b5 = (456, 456)\n- b6 = (528, 528)\n- b7 = (600, 600)\n\n## Evaluate\nRun **python evaluate.py --idx [index]** to evaluate the model's performance on the test dataset.\n\n## Different input image sizes for different neural networks\n\u003ctable\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003cth\u003eType\u003c/th\u003e\n          \u003cth\u003eNeural Network\u003c/th\u003e\n          \u003cth\u003eInput Image Size (height * width)\u003c/th\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd rowspan=\"3\"\u003eMobileNet\u003c/td\u003e\n          \u003ctd\u003eMobileNet_V1\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eMobileNet_V2\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eMobileNet_V3\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eEfficientNet\u003c/td\u003e\n          \u003ctd\u003eEfficientNet(B0~B7)\u003c/td\u003e\n          \u003ctd\u003e/\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd rowspan=\"2\"\u003eResNeXt\u003c/td\u003e\n          \u003ctd\u003eResNeXt50\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eResNeXt101\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd rowspan=\"2\"\u003eSEResNeXt\u003c/td\u003e\n          \u003ctd\u003eSEResNeXt50\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eSEResNeXt101\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd rowspan=\"3\"\u003eInception\u003c/td\u003e\n          \u003ctd\u003eInceptionV4\u003c/td\u003e\n          \u003ctd\u003e(299 * 299)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eInception_ResNet_V1\u003c/td\u003e\n          \u003ctd\u003e(299 * 299)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eInception_ResNet_V2\u003c/td\u003e\n          \u003ctd\u003e(299 * 299)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd rowspan=\"3\"\u003eSE_ResNet\u003c/td\u003e\n          \u003ctd\u003eSE_ResNet_50\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eSE_ResNet_101\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eSE_ResNet_152\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eSqueezeNet\u003c/td\u003e\n          \u003ctd\u003eSqueezeNet\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd rowspan=\"4\"\u003eDenseNet\u003c/td\u003e\n          \u003ctd\u003eDenseNet_121\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          
\u003ctd\u003eDenseNet_169\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eDenseNet_201\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eDenseNet_269\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eShuffleNetV2\u003c/td\u003e\n          \u003ctd\u003eShuffleNetV2\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd rowspan=\"5\"\u003eResNet\u003c/td\u003e\n          \u003ctd\u003eResNet_18\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eResNet_34\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eResNet_50\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eResNet_101\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n     \u003ctr align=\"center\"\u003e\n          \u003ctd\u003eResNet_152\u003c/td\u003e\n          \u003ctd\u003e(224 * 224)\u003c/td\u003e\n     \u003c/tr\u003e\n\u003c/table\u003e\n\n## References\n1. MobileNet_V1: [Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861)\n2. MobileNet_V2: [Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)\n3. MobileNet_V3: [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)\n4. 
EfficientNet: [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)\n5. The official code of EfficientNet: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet\n6. ResNeXt: [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431)\n7. Inception_V4/Inception_ResNet_V1/Inception_ResNet_V2: [Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning](https://arxiv.org/abs/1602.07261)\n8. The official implementation of Inception_V4: https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py\n9. The official implementation of Inception_ResNet_V2: https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_resnet_v2.py\n10. SENet: [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507)\n11. SqueezeNet: [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and \u003c0.5MB model size](https://arxiv.org/abs/1602.07360)\n12. DenseNet: [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)\n13. https://zhuanlan.zhihu.com/p/37189203\n14. ShuffleNetV2: [ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design](https://arxiv.org/abs/1807.11164)\n15. https://zhuanlan.zhihu.com/p/48261931\n16. ResNet: [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)\n17. RegNet: [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FcalmiLovesAI%2FBasic_CNNs_TensorFlow2","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FcalmiLovesAI%2FBasic_CNNs_TensorFlow2","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FcalmiLovesAI%2FBasic_CNNs_TensorFlow2/lists"}