# AugMix

<img align="center" src="assets/augmix.gif" width="750">

## Introduction

We propose AugMix, a data processing technique that mixes augmented images and
enforces consistent embeddings of the augmented images, resulting in increased
robustness and improved uncertainty calibration. Like random cropping or
CutOut, AugMix does not require tuning to work correctly, and thus enables
plug-and-play data augmentation. AugMix significantly improves robustness and
uncertainty measures on challenging image classification benchmarks, closing
the gap between previous methods and the best possible performance by more
than half in some cases.
With AugMix, we obtain state-of-the-art results on ImageNet-C and ImageNet-P,
and in uncertainty estimation when the train and test distributions do not
match.

For more details, please see our [ICLR 2020 paper](https://arxiv.org/pdf/1912.02781.pdf).

## Pseudocode

<img align="center" src="assets/pseudocode.png" width="750">

## Contents

This directory includes a reference NumPy implementation of the augmentation
method used in AugMix in `augment_and_mix.py`. The full AugMix method also adds
a Jensen-Shannon Divergence consistency loss to enforce consistent predictions
between two different augmentations of the input image and the clean image
itself.

We also include PyTorch re-implementations of AugMix for CIFAR-10/100 and
ImageNet in `cifar.py` and `imagenet.py` respectively, both of which support
training and evaluation on CIFAR-10/100-C and ImageNet-C.

## Requirements

*   numpy>=1.15.0
*   Pillow>=6.1.0
*   torch==1.2.0
*   torchvision==0.2.2

## Setup

1.  Install PyTorch and the other required Python libraries with:

    ```
    pip install -r requirements.txt
    ```

2.  Download the CIFAR-10-C and CIFAR-100-C datasets with:

    ```
    mkdir -p ./data/cifar
    curl -O https://zenodo.org/record/2535967/files/CIFAR-10-C.tar
    curl -O https://zenodo.org/record/3555552/files/CIFAR-100-C.tar
    tar -xvf CIFAR-100-C.tar -C data/cifar/
    tar -xvf CIFAR-10-C.tar -C data/cifar/
    ```

3.  Download ImageNet-C with:

    ```
    mkdir -p ./data/imagenet/imagenet-c
    curl -O https://zenodo.org/record/2235448/files/blur.tar
    curl -O https://zenodo.org/record/2235448/files/digital.tar
    curl -O https://zenodo.org/record/2235448/files/noise.tar
    curl -O https://zenodo.org/record/2235448/files/weather.tar
    tar -xvf blur.tar -C data/imagenet/imagenet-c
    tar -xvf digital.tar -C data/imagenet/imagenet-c
    tar -xvf noise.tar -C data/imagenet/imagenet-c
    tar -xvf weather.tar -C data/imagenet/imagenet-c
    ```

## Usage

The Jensen-Shannon Divergence loss term may be disabled for faster training,
at the cost of slightly lower performance, by adding the flag `--no-jsd`.

Training recipes used in our paper:

WRN: `python cifar.py`

AllConv: `python cifar.py -m allconv`

ResNeXt: `python cifar.py -m resnext -e 200`

DenseNet: `python cifar.py -m densenet -e 200 -wd 0.0001`

ResNet-50: `python imagenet.py <path/to/imagenet> <path/to/imagenet-c>`

## Pretrained weights

Weights for a ResNet-50 ImageNet classifier trained with AugMix for 180 epochs
are available
[here](https://drive.google.com/file/d/1z-1V3rdFiwqSECz7Wkmn4VJVefJGJGiF/view?usp=sharing).

This model has a 65.3 mean Corruption Error (mCE) and 77.53% top-1 accuracy on
clean ImageNet data.

## Citation

If you find this useful for your work, please consider citing:

```
@article{hendrycks2020augmix,
  title={{AugMix}: A Simple Data Processing Method to Improve Robustness and Uncertainty},
  author={Hendrycks, Dan and Mu, Norman and Cubuk, Ekin D.
  and Zoph, Barret and Gilmer, Justin and Lakshminarayanan, Balaji},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2020}
}
```
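For intuition, the two ingredients described in the Contents section can be
sketched in plain NumPy: sampling several short augmentation chains, combining
them with Dirichlet weights, interpolating with the clean image via a
Beta-sampled weight, and measuring a Jensen-Shannon consistency term between
predictions. This is a simplified illustration, not the reference
`augment_and_mix.py`; the toy pixel-level `AUGMENTATIONS` stand in for the
PIL-based ops used in the actual code, and the function names and defaults
here are illustrative.

```python
import numpy as np

# Hypothetical stand-in augmentations on float images in [0, 1]; the
# reference implementation uses PIL-based ops instead.
AUGMENTATIONS = [
    lambda x: np.clip(x * 1.2, 0.0, 1.0),   # brightness-like
    lambda x: np.roll(x, 2, axis=0),         # translation-like
    lambda x: x[::-1, :],                    # flip-like
]

def augment_and_mix(image, width=3, depth=-1, alpha=1.0, rng=None):
    """Combine `width` random augmentation chains with Dirichlet weights,
    then interpolate with the clean image using a Beta-sampled weight."""
    if rng is None:
        rng = np.random.default_rng()
    ws = rng.dirichlet([alpha] * width)      # convex weights over chains
    m = rng.beta(alpha, alpha)               # clean-vs-mixed interpolation
    mix = np.zeros_like(image, dtype=np.float64)
    for i in range(width):
        chain = image.astype(np.float64)
        d = depth if depth > 0 else int(rng.integers(1, 4))  # depth 1..3
        for _ in range(d):
            op = AUGMENTATIONS[rng.integers(len(AUGMENTATIONS))]
            chain = op(chain)
        mix += ws[i] * chain                 # weighted sum of chains
    return (1.0 - m) * image + m * mix       # stays in [0, 1]

def jsd_consistency(p_clean, p_aug1, p_aug2, eps=1e-12):
    """Jensen-Shannon divergence among three predicted distributions:
    the mean KL from each distribution to their mixture."""
    p = np.stack([p_clean, p_aug1, p_aug2])
    mixture = p.mean(axis=0)
    kl = np.sum(p * (np.log(p + eps) - np.log(mixture + eps)), axis=-1)
    return float(kl.mean())
```

In the full method, `jsd_consistency` would be applied to the classifier's
softmax outputs on the clean image and two independently mixed views, and
added to the usual cross-entropy loss, encouraging consistent predictions as
described above.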