{"id":13737487,"url":"https://github.com/WXinlong/DenseCL","last_synced_at":"2025-05-08T14:31:43.626Z","repository":{"id":39491280,"uuid":"343650432","full_name":"WXinlong/DenseCL","owner":"WXinlong","description":"Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral.","archived":false,"fork":false,"pushed_at":"2023-12-26T14:19:49.000Z","size":556,"stargazers_count":547,"open_issues_count":23,"forks_count":69,"subscribers_count":7,"default_branch":"main","last_synced_at":"2024-11-13T05:02:48.908Z","etag":null,"topics":["cvpr2021","dense-contrastive-learning","densecl","self-supervised-learning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/WXinlong.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2021-03-02T04:56:38.000Z","updated_at":"2024-11-12T13:33:10.000Z","dependencies_parsed_at":"2024-01-06T17:49:11.502Z","dependency_job_id":"5c7675e1-7fbe-4951-a633-2f963a85bc5f","html_url":"https://github.com/WXinlong/DenseCL","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/WXinlong%2FDenseCL","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/WXinlong%2FDenseCL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/WXinlong%2FDenseCL/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/WXinlong%2FDenseCL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/WXinlong","download_url":"https://codeload.github.com/WXinlong/DenseCL/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":224737406,"owners_count":17361345,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cvpr2021","dense-contrastive-learning","densecl","self-supervised-learning"],"created_at":"2024-08-03T03:01:49.865Z","updated_at":"2025-05-08T14:31:43.613Z","avatar_url":"https://github.com/WXinlong.png","language":"Python","readme":"# Dense Contrastive Learning for Self-Supervised Visual Pre-Training\n\nThis project hosts the code for implementing the DenseCL algorithm for self-supervised representation learning.\n\n\u003e [**Dense Contrastive Learning for Self-Supervised Visual Pre-Training**](https://arxiv.org/abs/2011.09157),  \n\u003e Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, Lei Li   \n\u003e In: Proc. IEEE Conf. 
## Updates
- A [simple tutorial](https://github.com/aim-uofa/AdelaiDet/blob/master/configs/DenseCL/README.md) for using DenseCL in AdelaiDet (e.g., with SOLOv2 and FCOS) is provided. (05/16/2021)
- Code and pre-trained models of DenseCL are released. (02/03/2021)

## Installation
Please refer to [INSTALL.md](docs/INSTALL.md) for installation and dataset preparation.

## Models
For your convenience, we provide the following pre-trained models on COCO or ImageNet.

pre-train method | pre-train dataset | backbone | #epoch | training time | VOC det | VOC seg | Link
--- |:---:|:---:|:---:|:---:|:---:|:---:|:---:
MoCo-v2 | COCO | ResNet-50 | 800 | 1.0d | 54.7 | 64.5 |
DenseCL | COCO | ResNet-50 | 800 | 1.0d | 56.7 | 67.5 | [download](https://huggingface.co/xinlongwang/DenseCL/resolve/main/densecl_r50_coco_800ep.pth?download=true)
DenseCL | COCO | ResNet-50 | 1600 | 2.0d | 57.2 | 68.0 | [download](https://huggingface.co/xinlongwang/DenseCL/resolve/main/densecl_r50_coco_1600ep.pth?download=true)
MoCo-v2 | ImageNet | ResNet-50 | 200 | 2.3d | 57.0 | 67.5 |
DenseCL | ImageNet | ResNet-50 | 200 | 2.3d | 58.7 | 69.4 | [download](https://huggingface.co/xinlongwang/DenseCL/resolve/main/densecl_r50_imagenet_200ep.pth?download=true)
DenseCL | ImageNet | ResNet-101 | 200 | 4.3d | 61.3 | 74.1 | [download](https://huggingface.co/xinlongwang/DenseCL/resolve/main/densecl_r101_imagenet_200ep.pth?download=true)

**Note:**
- The metrics for VOC det and seg are AP (COCO-style) and mIoU, respectively. The results are averaged over 5 trials.
- The training time is measured on 8 V100 GPUs.
- See our paper for more results on different benchmarks.

We also provide experiments using DenseCL in AdelaiDet models, e.g., SOLOv2 and FCOS. Please refer to the [instructions](https://github.com/aim-uofa/AdelaiDet/blob/master/configs/DenseCL/README.md) for simple usage.

- SOLOv2 on COCO Instance Segmentation

pre-train method | pre-train dataset | mask AP
--- |:---:|:---:
Supervised | ImageNet | 35.2
MoCo-v2 | ImageNet | 35.2
DenseCL | ImageNet | 35.7 (+0.5)

- FCOS on COCO Object Detection

pre-train method | pre-train dataset | box AP
--- |:---:|:---:
Supervised | ImageNet | 39.9
MoCo-v2 | ImageNet | 40.3
DenseCL | ImageNet | 40.9 (+1.0)


## Usage

### Training
    ./tools/dist_train.sh configs/selfsup/densecl/densecl_coco_800ep.py 8

### Extracting Backbone Weights
    WORK_DIR=work_dirs/selfsup/densecl/densecl_coco_800ep/
    CHECKPOINT=${WORK_DIR}/epoch_800.pth
    WEIGHT_FILE=${WORK_DIR}/extracted_densecl_coco_800ep.pth

    python tools/extract_backbone_weights.py ${CHECKPOINT} ${WEIGHT_FILE}
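Once extracted, the weight file is intended to replace an ImageNet-initialized backbone in downstream frameworks (see the transferring instructions below). As a minimal sketch of what that looks like in plain PyTorch, assuming the file holds an ordinary ResNet-50 backbone state dict (key prefixes may differ and need remapping in practice):

```python
import torch
import torchvision

# Load the extracted weights; unwrap a 'state_dict' entry if present.
state = torch.load('extracted_densecl_coco_800ep.pth', map_location='cpu')
if isinstance(state, dict) and 'state_dict' in state:
    state = state['state_dict']

model = torchvision.models.resnet50()
# strict=False: the classifier head (fc) is not part of the pre-trained
# backbone, so its parameters are expected to be missing.
missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)
```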
### Transferring to Object Detection and Segmentation
Please refer to [README.md](benchmarks/detection/README.md) for transferring to object detection and semantic segmentation.
Please refer to the [instructions](https://github.com/aim-uofa/AdelaiDet/blob/master/configs/DenseCL/README.md) for transferring to dense prediction models in AdelaiDet, e.g., SOLOv2 and FCOS.

### Tips
- After extracting the backbone weights, the model can be used to replace the original ImageNet pre-trained model as initialization for many dense prediction tasks.
- If your machine suffers from slow data loading, especially for ImageNet, we suggest converting the dataset to LMDB format through [folder2lmdb_imagenet.py](tools/folder2lmdb_imagenet.py) or [folder2lmdb_coco.py](tools/folder2lmdb_coco.py), and using this [config_imagenet](configs/selfsup/densecl/densecl_imagenet_lmdb_200ep.py) or [config_coco](configs/selfsup/densecl/densecl_coco_lmdb_800ep.py) for training.

## Acknowledgement
We would like to thank [OpenSelfSup](https://github.com/open-mmlab/OpenSelfSup) for its open-source project and [PyContrast](https://github.com/HobbitLong/PyContrast) for its detection evaluation configs.

## Citations
Please consider citing our paper in your publications if the project helps your research. The BibTeX reference is as follows.
```
@inproceedings{wang2020DenseCL,
  title={Dense Contrastive Learning for Self-Supervised Visual Pre-Training},
  author={Wang, Xinlong and Zhang, Rufeng and Shen, Chunhua and Kong, Tao and Li, Lei},
  booktitle={Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}
```