[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/sure-survey-recipes-for-building-reliable-and/learning-with-noisy-labels-on-animal)](https://paperswithcode.com/sota/learning-with-noisy-labels-on-animal?p=sure-survey-recipes-for-building-reliable-and)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/sure-survey-recipes-for-building-reliable-and/image-classification-on-food-101n-1)](https://paperswithcode.com/sota/image-classification-on-food-101n-1?p=sure-survey-recipes-for-building-reliable-and)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/sure-survey-recipes-for-building-reliable-and/long-tail-learning-on-cifar-10-lt-r-50)](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-50?p=sure-survey-recipes-for-building-reliable-and)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/sure-survey-recipes-for-building-reliable-and/long-tail-learning-on-cifar-10-lt-r-10)](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-10?p=sure-survey-recipes-for-building-reliable-and)

# 📝 SURE (CVPR 2024 & ECCV 2024 OOD-CV Challenge Winner)

### Introduction
This is the official implementation of our CVPR 2024 paper *"SURE: SUrvey REcipes for building reliable and robust deep networks".* Our recipes are powerful tools for addressing real-world challenges such as long-tailed classification, learning with noisy labels, data corruption, and out-of-distribution detection. If you find this repo useful, please give it a star ⭐ and consider citing our paper. Thank you.

[![arXiv](https://img.shields.io/badge/arXiv-2403.00543-red.svg)](https://openaccess.thecvf.com/content/CVPR2024/papers/Li_SURE_SUrvey_REcipes_for_building_reliable_and_robust_deep_networks_CVPR_2024_paper.pdf)
[![Winner](https://img.shields.io/badge/Winner-ECCV%202024%20OOD--CV%20Challenge-yellow?style=flat)](https://www.ood-cv.org/challenge.html)

[![Project Page](https://img.shields.io/badge/Project%20Page-blue?style=flat)](https://yutingli0606.github.io/SURE/)
[![Google Drive](https://img.shields.io/badge/Google%20Drive-blue?style=flat)](https://drive.google.com/drive/folders/1xT-cX22_I8h5yAYT1WNJmhSLrQFZZ5t1?usp=sharing)
[![Poster](https://img.shields.io/badge/Poster-blue?style=flat)](img/poster.pdf)

### News
- **2024.09.26:** 🏆 🏆 🏆 Our work won **first place** in the [ECCV 2024 OOD-CV Challenge](https://www.ood-cv.org/challenge.html)! More details about our solution can be found in the [SSB-OSR](https://github.com/LIYangggggg/SSB-OSR) repository.
- **2024.02.27:** :rocket: :rocket: :rocket: Our paper has been accepted by CVPR 2024!
<p align="center">
<img src="img/Teaser.png" width="1000px" alt="teaser">
</p>

## Table of Content
* [1. Overview of recipes](#1-overview-of-recipes)
* [2. Visual Results](#2-visual-results)
* [3. Installation](#3-installation)
* [4. Quick Start](#4-quick-start)
* [5. Citation](#5-citation)
* [6. Acknowledgement](#6-acknowledgement)

## 1. Overview of recipes
<p align="center">
<img src="img/recipes.png" width="1000px" alt="method">
</p>

## 2. Visual Results
<p align="center">
<img src="img/confidence.png" width="1000px" alt="method">
</p>
<p align="center">
<img src="img/ood.png" width="650px" alt="method">
</p>

## 3. Installation

### 3.1. Environment

Our model can be trained on a single **RTX-4090 24G** GPU.

```bash
conda env create -f environment.yml
conda activate u
```

The code was tested with Python 3.9 and PyTorch 1.13.0.

### 3.2. Datasets
#### 3.2.1 CIFAR and Tiny-ImageNet
* We use **CIFAR10, CIFAR100, and Tiny-ImageNet** for failure prediction (also known as misclassification detection).
* We keep **10%** of the training samples as a validation set for failure prediction.
* Download the datasets to ./data/ and split them into train/val/test. Take CIFAR10 as an example:
```
cd data
bash download_cifar.sh
```
The structure of the file should be:
```
./data/CIFAR10/
├── train
├── val
└── test
```
* We have already split Tiny-ImageNet; you can download it from [here.](https://drive.google.com/drive/folders/1xT-cX22_I8h5yAYT1WNJmhSLrQFZZ5t1?usp=sharing)
#### 3.2.2 ImageNet1k and ImageNet21k
* We use **ImageNet1k and ImageNet21k** for detecting out-of-distribution samples.
* For ImageNet, the ImageNet-1K classes (ILSVRC12 challenge) are used as Known, and specific classes from [ImageNet-21K-P](https://arxiv.org/abs/2104.10972) are selected as Unknown. For more details on dataset preparation, see [here](https://github.com/sgvaze/SSB/blob/main/DATA.md).
#### 3.2.3 Animal-10N and Food-101N
* We use **Animal-10N and Food-101N** for learning with noisy labels.
* To download the Animal-10N dataset [[Song et al., 2019]](https://proceedings.mlr.press/v97/song19b/song19b.pdf), please refer to [here](https://dm.kaist.ac.kr/datasets/animal-10n/). The structure of the file should be:
```
./data/Animal10N/
├── train
└── test
```
* To download the Food-101N dataset [[Lee et al., 2018]](https://arxiv.org/pdf/1711.07131.pdf), please refer to [here](https://kuanghuei.github.io/Food-101N/).
The structure of the file should be:
```
./data/Food-101N/
├── train
└── test
```
#### 3.2.4 CIFAR-LT
* We use **CIFAR-LT** with imbalance factors (10, 50, 100) for long-tailed classification.
* Rename the original CIFAR10 and CIFAR100 (do not split off a validation set) to 'CIFAR10_LT' and 'CIFAR100_LT' respectively.
* The structure of the file should be:
```
./data/CIFAR10_LT/
├── train
└── test
```
#### 3.2.5 CIFAR10-C
* We use **CIFAR10-C** to test robustness under data corruptions.
* To download the CIFAR10-C dataset [[Hendrycks et al., 2019]](https://arxiv.org/pdf/1903.12261.pdf), please refer to [here](https://github.com/hendrycks/robustness?tab=readme-ov-file). The structure of the file should be:
```
./data/CIFAR-10-C/
├── brightness.npy
├── contrast.npy
├── defocus_blur.npy
...
```

#### 3.2.6 Stanford CARS
* We additionally run experiments on **Stanford CARS**, which contains 16,185 images of 196 classes of cars, split into 8,144 training images and 8,041 testing images.
* To download the dataset, please refer to [here](http://ai.stanford.edu/~jkrause/cars/car_dataset.html). The structure of the file should be:
```
./data/CARS/
├── train
└── test
...
```

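The dataset layouts above all assume per-class `train`/`val`/`test` folders, and Section 3.2.1 keeps 10% of the training samples as a validation set. As an illustration only (the repo's own `download_cifar.sh` performs the split for CIFAR; the function name and directory layout below are assumptions), a minimal per-class 10% holdout could look like:

```python
import random
import shutil
from pathlib import Path

def split_train_val(train_dir: str, val_dir: str, val_frac: float = 0.1, seed: int = 0) -> int:
    """Move `val_frac` of the images in each class folder under `train_dir`
    into a mirrored class folder under `val_dir`. Returns the number of files moved."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    moved = 0
    for class_dir in sorted(Path(train_dir).iterdir()):
        if not class_dir.is_dir():
            continue
        files = sorted(class_dir.iterdir())
        rng.shuffle(files)
        n_val = max(1, int(len(files) * val_frac))  # at least one val sample per class
        dest = Path(val_dir) / class_dir.name
        dest.mkdir(parents=True, exist_ok=True)
        for f in files[:n_val]:
            shutil.move(str(f), str(dest / f.name))
            moved += 1
    return moved
```

For example, `split_train_val("./data/CIFAR10/train", "./data/CIFAR10/val")` would move 10% of each class into `val`, leaving `train` with the remaining 90%.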
## 4. Quick Start
* Our model checkpoints are saved [here.](https://drive.google.com/drive/folders/1xT-cX22_I8h5yAYT1WNJmhSLrQFZZ5t1?usp=sharing)
* All results are saved in test_results.csv.
### 4.1 Failure Prediction
* We provide convenient and comprehensive commands in ./run/ to train and test different backbones across different datasets, helping researchers reproduce the results of the paper.

<details>
<summary>
Take run/CIFAR10/wideresnet.sh as an example:

</summary>
  <details>
   <summary>
    MSP
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10
  </details>

  <details>
   <summary>
    RegMixup
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0.5 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0.5 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10
  </details>

  <details>
   <summary>
    CRL
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name baseline \
      --crl-weight 0.5 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name baseline \
      --crl-weight 0.5 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10
  </details>

  <details>
   <summary>
    SAM
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name sam \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name sam \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10
  </details>

  <details>
   <summary>
    SWA
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name swa \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name swa \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10
  </details>

  <details>
   <summary>
    FMFP
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10
  </details>

  <details>
   <summary>
    SURE
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name fmfp \
      --crl-weight 0.5 \
      --mixup-weight 0.5 \
      --mixup-beta 10 \
      --use-cosine \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name wrn \
      --optim-name fmfp \
      --crl-weight 0.5 \
      --mixup-weight 0.5 \
      --use-cosine \
      --save-dir ./CIFAR10_out/wrn_out \
      Cifar10
  </details>
</details>

Note that:
* The official **DeiT-B** checkpoint can be downloaded from [here](https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth).
* The official **DeiT-B-Distilled** checkpoint can be downloaded from [here](https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_224-df68dfff.pth).
* Then set the `--deit-path` argument accordingly.

<details>
<summary>
Take run/CIFAR10/deit.sh as an example:

</summary>
  <details>
   <summary>
    MSP
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 5 \
      --epochs 50 \
      --lr 0.01 \
      --weight-decay 5e-5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10

      python3 test.py \
      --batch-size 64 \
      --gpu 5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10
  </details>

  <details>
   <summary>
    RegMixup
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 5 \
      --epochs 50 \
      --lr 0.01 \
      --weight-decay 5e-5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0.2 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10

      python3 test.py \
      --batch-size 64 \
      --gpu 5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name baseline \
      --crl-weight 0 \
      --mixup-weight 0.2 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10
  </details>

  <details>
   <summary>
    CRL
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 5 \
      --epochs 50 \
      --lr 0.01 \
      --weight-decay 5e-5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name baseline \
      --crl-weight 0.2 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10

      python3 test.py \
      --batch-size 64 \
      --gpu 5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name baseline \
      --crl-weight 0.2 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10
  </details>

  <details>
   <summary>
    SAM
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 5 \
      --epochs 50 \
      --lr 0.01 \
      --weight-decay 5e-5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name sam \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10

      python3 test.py \
      --batch-size 64 \
      --gpu 5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name sam \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10
  </details>

  <details>
   <summary>
    SWA
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 5 \
      --epochs 50 \
      --lr 0.01 \
      --weight-decay 5e-5 \
      --swa-epoch-start 0 \
      --swa-lr 0.004 \
      --nb-run 3 \
      --model-name deit \
      --optim-name swa \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10

      python3 test.py \
      --batch-size 64 \
      --gpu 5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name swa \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10
  </details>

  <details>
   <summary>
    FMFP
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 5 \
      --epochs 50 \
      --lr 0.01 \
      --weight-decay 5e-5 \
      --swa-epoch-start 0 \
      --swa-lr 0.004 \
      --nb-run 3 \
      --model-name deit \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 0 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10

      python3 test.py \
      --batch-size 64 \
      --gpu 5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 0 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10
  </details>

  <details>
   <summary>
    SURE
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 5 \
      --epochs 50 \
      --lr 0.01 \
      --weight-decay 5e-5 \
      --swa-epoch-start 0 \
      --swa-lr 0.004 \
      --nb-run 3 \
      --model-name deit \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 0.2 \
      --mixup-beta 10 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10

      python3 test.py \
      --batch-size 64 \
      --gpu 5 \
      --nb-run 3 \
      --model-name deit \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 0.2 \
      --save-dir ./CIFAR10_out/deit_out \
      Cifar10
  </details>
</details>

<details>
<summary>
The results of failure prediction.
</summary>
<p align="center">
<img src="img/main_results.jpeg" width="1000px" alt="method">
</p>
</details>

### 4.2 Long-tailed classification
* We provide convenient and comprehensive commands in ./run/CIFAR10_LT and ./run/CIFAR100_LT to train and test our method under long-tailed distributions.

<details>
<summary>
Take run/CIFAR10_LT/resnet32.sh as an example:

</summary>
  <details>
   <summary>
    Imbalance factor = 10
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name resnet32 \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 1 \
      --mixup-beta 10 \
      --use-cosine \
      --save-dir ./CIFAR10_LT/res32_out \
      Cifar10_LT

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name resnet32 \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 1 \
      --use-cosine \
      --save-dir ./CIFAR10_LT/res32_out \
      Cifar10_LT
  </details>

  <details>
   <summary>
    Imbalance factor = 50
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name resnet32 \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 1 \
      --mixup-beta 10 \
      --use-cosine \
      --save-dir ./CIFAR10_LT_50/res32_out \
      Cifar10_LT_50

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name resnet32 \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 1 \
      --use-cosine \
      --save-dir ./CIFAR10_LT_50/res32_out \
      Cifar10_LT_50
  </details>

  <details>
   <summary>
    Imbalance factor = 100
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 3 \
      --model-name resnet32 \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 1 \
      --mixup-beta 10 \
      --use-cosine \
      --save-dir ./CIFAR10_LT_100/res32_out \
      Cifar10_LT_100

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 3 \
      --model-name resnet32 \
      --optim-name fmfp \
      --crl-weight 0 \
      --mixup-weight 1 \
      --use-cosine \
      --save-dir ./CIFAR10_LT_100/res32_out \
      Cifar10_LT_100
  </details>
</details>

You can run the second-stage uncertainty-aware re-weighting with:
```bash
python3 finetune.py \
--batch-size 128 \
--gpu 5 \
--nb-run 1 \
--model-name resnet32 \
--optim-name fmfp \
--fine-tune-lr 0.005 \
--reweighting-type exp \
--t 1 \
--crl-weight 0 \
--mixup-weight 1 \
--mixup-beta 10 \
--fine-tune-epochs 50 \
--use-cosine \
--save-dir ./CIFAR100LT_100_out/51.60 \
Cifar100_LT_100
```

<details>
<summary>
The results of long-tailed classification.
</summary>
<p align="center">
<img src="img/long-tail.jpeg" width="600px" alt="method">
</p>
</details>

### 4.3 Learning with noisy labels
* We provide convenient and comprehensive commands in ./run/animal10N and ./run/Food101N to train and test our method with noisy labels.

<details>
   <summary>
    Animal-10N
   </summary>

      python3 main.py \
      --batch-size 128 \
      --gpu 0 \
      --epochs 200 \
      --nb-run 1 \
      --model-name vgg19bn \
      --optim-name fmfp \
      --crl-weight 0.2 \
      --mixup-weight 1 \
      --mixup-beta 10 \
      --use-cosine \
      --save-dir ./Animal10N_out/vgg19bn_out \
      Animal10N

      python3 test.py \
      --batch-size 128 \
      --gpu 0 \
      --nb-run 1 \
      --model-name vgg19bn \
      --optim-name fmfp \
      --crl-weight 0.2 \
      --mixup-weight 1 \
      --use-cosine \
      --save-dir ./Animal10N_out/vgg19bn_out \
      Animal10N
  </details>

  <details>
   <summary>
    Food-101N
   </summary>

      python3 main.py \
      --batch-size 64 \
      --gpu 0 \
      --epochs 30 \
      --nb-run 1 \
      --model-name resnet50 \
      --optim-name fmfp \
      --crl-weight 0.2 \
      --mixup-weight 1 \
      --mixup-beta 10 \
      --lr 0.01 \
      --swa-lr 0.005 \
      --swa-epoch-start 22 \
      --use-cosine True \
      --save-dir ./Food101N_out/resnet50_out \
      Food101N

      python3 test.py \
      --batch-size 64 \
      --gpu 0 \
      --nb-run 1 \
      --model-name resnet50 \
      --optim-name fmfp \
      --crl-weight 0.2 \
      --mixup-weight 1 \
      --use-cosine True \
      --save-dir ./Food101N_out/resnet50_out \
      Food101N
  </details>

<details>
<summary>
The results of learning with noisy labels.
</summary>
<p align="center">
<img src="img/label-noise.jpeg" width="600px" alt="method">
</p>
</details>

### 4.4 Robustness under data corruption
* You can test on CIFAR10-C with the following code in test.py:
```python
if args.data_name == 'cifar10':
    cor_results_storage = test_cifar10c_corruptions(net, args.corruption_dir, transform_test,
                                                    args.batch_size, metrics, logger)
    cor_results = {corruption: {
                   severity: {
                   metric: cor_results_storage[corruption][severity][metric][0] for metric in metrics} for severity
                   in range(1, 6)} for corruption in data.CIFAR10C.CIFAR10C.cifarc_subsets}
    cor_results_all_models[f"model_{r + 1}"] = cor_results
```
* The results are saved in cifar10c_results.csv.
* Testing on CIFAR10-C takes a while.
If you don't need the results, simply comment out this block.

<details>
<summary>
The results of failure prediction under distribution shift.
</summary>
<p align="center">
<img src="img/data-corruption.jpeg" width="1000px" alt="method">
</p>
</details>

### 4.5 Out-of-distribution detection
* You can test on ImageNet with [SSB-OSR](https://github.com/LIYangggggg/SSB-OSR).

<details>
<summary>
The results of out-of-distribution detection.
</summary>
<p align="center">
<img src="img/ood_results.png" width="800px" alt="method">
</p>
</details>

## 5. Citation
If our project is helpful for your research, please consider citing:
```
@InProceedings{Li_2024_CVPR,
    author    = {Li, Yuting and Chen, Yingyi and Yu, Xuanlong and Chen, Dexiong and Shen, Xi},
    title     = {SURE: SUrvey REcipes for building reliable and robust deep networks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {17500-17510}
}

@article{Li2024sureood,
    author    = {Li, Yang and Sha, Youyang and Wu, Shengliang and Li, Yuting and Yu, Xuanlong and Huang, Shihua and Cun, Xiaodong and Chen, Yingyi and Chen, Dexiong and Shen, Xi},
    title     = {SURE-OOD: Detecting OOD samples with SURE},
    month     = {September},
    year      = {2024},
}
```

## 6. Acknowledgement
We refer to code from [FMFP](https://github.com/Impression2805/FMFP) and [OpenMix](https://github.com/Impression2805/OpenMix).
Thanks for their awesome work.