## Training EfficientDet on Your Own Dataset

**Xu Jing**

Paper: <https://arxiv.org/abs/1911.09070>

Base GitHub Repo: <https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch>

Official Repo: <https://github.com/google/automl/tree/master/efficientdet>

A Chinese-language introduction to the EfficientDet algorithm: [EfficientDet_CN.md](./EfficientDet_CN.md)

> Using a dataset from a real competition, this project demonstrates step by step how to train, evaluate, and run inference with the recently open-sourced, near-SOTA PyTorch implementation of EfficientDet. As in the paper, we use no data augmentation, model ensembling, or other post-processing tricks to boost accuracy; if you want to add augmentation strategies, you can implement them in `efficientdet/dataset.py`.
>
> We also do not preprocess the underwater images with approaches such as [UWGAN_UIE](https://github.com/DataXujing/UWGAN_UIE), Water Quality Transfer (WQT), DG-YOLO, or underwater dehazing algorithms.
>
> Tricks like these would likely improve detection accuracy further!

### 1. Data Source

The data comes from the [underwater object detection competition on Kesci](https://www.kesci.com/home/competition/5e535a612537a0002ca864ac/content/2):

![](pic/data/p0.png)

**Competition overview**

"Background": With the rapid development of ocean observation, underwater object detection plays an increasingly important role in naval coastal defense as well as in marine industries such as fishing and aquaculture, and underwater imagery is a key carrier of marine information. This competition asks participants to use algorithms to detect the locations of different marine products (sea cucumber, sea urchin, scallop, starfish) in real seabed images.

![](pic/data/p1.png)

"Data": The training set contains 5,543 underwater optical images in JPG format with corresponding annotations; the A-board test set has 800 images and the B-board test set has 1,200.

"Evaluation metric": mAP (mean Average Precision)

> Note: the data is provided by Peng Cheng Laboratory.

### 2. Data Conversion

We store the data under the project's `dataset` directory:

```
..
└─underwater
    ├─Annotations  # XML annotations
    └─JPEGImages   # JPG originals
```

First, split the data into training and validation sets. We use a random 9:1 split; the split data is then converted to COCO format.
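The 9:1 random split can be sketched as below. `split_dataset` is a hypothetical helper (the repo does not ship one); the file stems would come from the XML names under `underwater/Annotations`.

```python
import random

def split_dataset(stems, ratio=0.9, seed=0):
    """Randomly split annotation/image stems into train and val lists (9:1 by default)."""
    stems = sorted(stems)                 # deterministic order before shuffling
    random.Random(seed).shuffle(stems)
    n_train = int(len(stems) * ratio)
    return stems[:n_train], stems[n_train:]

# In practice the stems would be gathered from the annotation files, e.g.
# [p.stem for p in Path("underwater/Annotations").glob("*.xml")].
train, val = split_dataset([f"{i:06d}" for i in range(5543)])
print(len(train), len(val))  # 4988 555
```

Each stem's `.xml` and `.jpg` would then be copied into the `train/` and `val/` trees shown below.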
Directory structure after the train/val split:

```
..
├─train
│  ├─Annotations
│  └─JPEGImages
└─val
    ├─Annotations
    └─JPEGImages
```

Convert VOC to COCO:

```
python voc2coco.py train.txt ./train/Annotations instances_train.json ./train/JPEGImages
python voc2coco.py val.txt ./val/Annotations instances_val.json ./val/JPEGImages
# the generated json files are stored as dataset/underwater/annotations/*.json
```
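The internals of `voc2coco.py` are not shown in this README, but the core VOC-to-COCO mapping it performs looks roughly like this. This is a sketch, not the script's actual code; `voc_to_coco` and the sample XML are made up for illustration, and the field names follow the standard COCO detection format.

```python
import json
import xml.etree.ElementTree as ET

CLASSES = ["holothurian", "echinus", "scallop", "starfish"]

def voc_to_coco(xml_strings):
    """Convert VOC-style XML annotations to a COCO detection dict."""
    coco = {"images": [], "annotations": [],
            "categories": [{"id": i + 1, "name": n} for i, n in enumerate(CLASSES)]}
    ann_id = 1
    for img_id, xml in enumerate(xml_strings, start=1):
        root = ET.fromstring(xml)
        size = root.find("size")
        coco["images"].append({"id": img_id,
                               "file_name": root.findtext("filename"),
                               "width": int(size.findtext("width")),
                               "height": int(size.findtext("height"))})
        for obj in root.iter("object"):
            b = obj.find("bndbox")
            xmin, ymin = float(b.findtext("xmin")), float(b.findtext("ymin"))
            xmax, ymax = float(b.findtext("xmax")), float(b.findtext("ymax"))
            w, h = xmax - xmin, ymax - ymin   # COCO boxes are [x, y, width, height]
            coco["annotations"].append({"id": ann_id, "image_id": img_id,
                                        "category_id": CLASSES.index(obj.findtext("name")) + 1,
                                        "bbox": [xmin, ymin, w, h],
                                        "area": w * h, "iscrowd": 0})
            ann_id += 1
    return coco

sample = """<annotation><filename>000001.jpg</filename>
<size><width>720</width><height>405</height></size>
<object><name>echinus</name>
<bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>120</ymax></bndbox>
</object></annotation>"""
print(json.dumps(voc_to_coco([sample])["annotations"][0]["bbox"]))  # [10.0, 20.0, 100.0, 100.0]
```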
### 3. Modifying the EfficientDet Project Files

1. Create a `dataset` folder to hold the training and validation data:

```
dataset
└─underwater  # dataset name for this project
    ├─annotations # instances_train.json, instances_val.json
    ├─train   # train jpgs
    └─val   # val jpgs
```

2. Create a `logs` folder

`logs` holds the tensorboardX logs and the models saved during training.

3. Modify `train.py` [used for training]

```
def get_args():
    parser.add_argument('-p', '--project', type=str, default='underwater', help='project file that contains parameters')
    parser.add_argument('--batch_size', type=int, default=16, help='The number of images per batch among all devices')
```

4. Modify `efficientdet_test.py` [used to test new images]

```
# obj_list = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
#             'fire hydrant', '', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep',
#             'cow', 'elephant', 'bear', 'zebra', 'giraffe', '', 'backpack', 'umbrella', '', '', 'handbag', 'tie',
#             'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove',
#             'skateboard', 'surfboard', 'tennis racket', 'bottle', '', 'wine glass', 'cup', 'fork', 'knife', 'spoon',
#             'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut',
#             'cake', 'chair', 'couch', 'potted plant', 'bed', '', 'dining table', '', '', 'toilet', '', 'tv',
#             'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink',
#             'refrigerator', '', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier',
#             'toothbrush']

obj_list = ["holothurian", "echinus", "scallop", "starfish"]  # replace with your own classes
compound_coef = 2  # D0-D6
model.load_state_dict(torch.load("./logs/underwater/efficientdet-d2_122_38106.pth"))  # model path
```

5. Modify `coco_eval.py` [used to evaluate the model]

```
ap.add_argument('-p', '--project', type=str, default='underwater', help='project file that contains parameters')
```

6. Modify `efficientdet/config.py`

```
# COCO_CLASSES = ["person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat",
#                 "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog",
#                 "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
#                 "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
#                 "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle",
#                 "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",
#                 "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant",
#                 "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
#                 "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors",
#                 "teddy bear", "hair drier", "toothbrush"]
COCO_CLASSES = ["holothurian", "echinus", "scallop", "starfish"]
```

7. Create a yml config file (`./projects/underwater.yml`) [training configuration]

```
project_name: underwater  # also the folder name of the dataset that under data_path folder
train_set: train
val_set: val
num_gpus: 1

# mean and std in RGB order, actually this part should remain unchanged as long as your dataset is similar to coco.
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]

# this is coco anchors, change it if necessary
anchors_scales: '[2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]'
anchors_ratios: '[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]'

# must match your dataset's category_id.
# category_id is one_indexed,
# for example, the index of 'car' here is 2, while its category_id is 3
obj_list: ["holothurian", "echinus", "scallop", "starfish"]
```
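As the yml comments note, `category_id` is one-indexed while positions in `obj_list` are zero-indexed. A quick sanity check of that mapping (a sketch; it assumes the conversion assigns category ids in `obj_list` order):

```python
obj_list = ["holothurian", "echinus", "scallop", "starfish"]

def category_id(name):
    # obj_list positions are zero-indexed; COCO category_ids are one-indexed
    return obj_list.index(name) + 1

print({name: category_id(name) for name in obj_list})
# {'holothurian': 1, 'echinus': 2, 'scallop': 3, 'starfish': 4}
```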
### 4. Training EfficientDet

```
# train EfficientDet-D2 on your own dataset from scratch
python train.py -c 2 --batch_size 16 --lr 1e-4

# train efficientdet-d2 on your own dataset from a pretrained model (recommended)
python train.py -c 2 --batch_size 8 --lr 1e-5 --num_epochs 10 \
 --load_weights /path/to/your/weights/efficientdet-d2.pth

# with a coco-pretrained model, you can even freeze the backbone and train heads only
# to speed up training and help convergence.
python train.py -c 2 --batch_size 8 --lr 1e-5 --num_epochs 10 \
 --load_weights /path/to/your/weights/efficientdet-d2.pth \
 --head_only True

# Early stopping: press Ctrl+C;
# the program will catch KeyboardInterrupt,
# stop training, and save the current checkpoint.

# resume training from the last checkpoint
python train.py -c 2 --batch_size 8 --lr 1e-5 \
 --load_weights last \
 --head_only True
```
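The Ctrl+C behaviour described above is the usual catch-KeyboardInterrupt-then-save pattern. A minimal stdlib-only sketch of the idea (the real `train.py` saves a torch checkpoint; a JSON stand-in and a simulated interrupt are used here):

```python
import json
from pathlib import Path

def train(num_epochs, ckpt_path, interrupt_at=None):
    """Toy loop illustrating graceful Ctrl+C checkpointing."""
    epoch = 0
    try:
        for epoch in range(num_epochs):
            if epoch == interrupt_at:        # stand-in for a real Ctrl+C
                raise KeyboardInterrupt
            # ... forward/backward/optimizer step would go here ...
    except KeyboardInterrupt:
        pass                                 # fall through and save what we have
    Path(ckpt_path).write_text(json.dumps({"epoch": epoch}))
    return epoch

last = train(10, "last_checkpoint.json", interrupt_at=3)
print(last)  # 3
```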
### 5. Testing EfficientDet

1. Evaluate the model with COCO mAP:

```
python coco_eval.py -p underwater -c 2 -w ./logs/underwater/efficientdet-d2_122_38106.pth
```

```
# evaluation results
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.381
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.714
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.368
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.170
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.351
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.426
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.149
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.433
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.464
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.267
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.429
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.507
```

2. Debugging during training

```
# when you get a bad result, you need to debug the training process.
python train.py -c 2 --batch_size 8 --lr 1e-5 --debug True

# then check the test/ folder, where you can visualize the predicted boxes during training.
# don't panic if you see countless error boxes; that is normal at an early stage of training.
# But if you still can't see a single normal box after several epochs, not even one across all images,
# then either the anchor config is inappropriate or the ground truth is corrupted.
```

3. Run inference on new images:

```
python efficientdet_test.py
```

Inference speed is essentially real-time:

![](pic/data/p2.png)

![](pic/data/img_test1.jpg)

![](pic/data/img_test2.jpg)

4. View results in TensorBoard:

```
tensorboard --logdir logs/underwater/tensorboard
```

![](pic/data/p3.png)

![](pic/data/p4.png)

![](pic/data/p5.png)
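For reference, the AP rows in the evaluation table in section 5 average over IoU thresholds from 0.50 to 0.95. IoU itself, for two boxes in `[x1, y1, x2, y2]` form, can be computed as:

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two boxes offset by half their width: IoU = 50 / 150 = 1/3
print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # 0.3333333333333333
```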