{"id":14286144,"url":"https://github.com/JunlinHan/YOCO","last_synced_at":"2025-08-15T07:31:07.109Z","repository":{"id":185276186,"uuid":"452704480","full_name":"JunlinHan/YOCO","owner":"JunlinHan","description":"Code for You Only Cut Once: Boosting Data Augmentation with a Single Cut, ICML 2022.","archived":false,"fork":false,"pushed_at":"2023-08-01T09:05:26.000Z","size":5555,"stargazers_count":104,"open_issues_count":1,"forks_count":10,"subscribers_count":4,"default_branch":"main","last_synced_at":"2024-12-16T02:34:22.008Z","etag":null,"topics":["cifar10","cifar100","classification","computer-vision","contrastive-learning","data-augmentation","data-augmentation-strategies","data-augmentations","icml","imagenet","imagenet-classifier","instance-segmentation","object-detection","pytorch","rain-removal","super-resolution"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/JunlinHan.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-01-27T13:59:57.000Z","updated_at":"2024-12-15T03:40:37.000Z","dependencies_parsed_at":null,"dependency_job_id":"d6e003f8-d1c4-4fd5-bed3-4a2e1330e085","html_url":"https://github.com/JunlinHan/YOCO","commit_stats":null,"previous_names":["junlinhan/yoco"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/JunlinHan/YOCO","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JunlinHan%2FYOCO","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JunlinHan%2FYOCO/tags","releases_url":"https://r
epos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JunlinHan%2FYOCO/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JunlinHan%2FYOCO/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/JunlinHan","download_url":"https://codeload.github.com/JunlinHan/YOCO/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JunlinHan%2FYOCO/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":270539493,"owners_count":24603182,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-15T02:00:12.559Z","response_time":110,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cifar10","cifar100","classification","computer-vision","contrastive-learning","data-augmentation","data-augmentation-strategies","data-augmentations","icml","imagenet","imagenet-classifier","instance-segmentation","object-detection","pytorch","rain-removal","super-resolution"],"created_at":"2024-08-23T17:00:41.186Z","updated_at":"2025-08-15T07:31:07.099Z","avatar_url":"https://github.com/JunlinHan.png","language":"Python","readme":"# You Only Cut Once (YOCO)\n\nYOCO is a simple, parameter-free augmentation strategy that is easy to use and boosts almost all augmentations for free (negligible computation \u0026 memory cost). 
\n\n[You Only Cut Once: Boosting Data Augmentation with a Single Cut](https://arxiv.org/abs/2201.12078)\u003cbr\u003e\n[Junlin Han](https://junlinhan.github.io/), Pengfei Fang, Weihao Li, Jie Hong, Ali Armin, [Ian Reid](https://cs.adelaide.edu.au/~ianr/), [Lars Petersson](https://people.csiro.au/P/L/Lars-Petersson), [Hongdong Li](http://users.cecs.anu.edu.au/~hongdong/)\u003cbr\u003e\nDATA61-CSIRO, Australian National University, and University of Adelaide\u003cbr\u003e\nInternational Conference on Machine Learning (ICML), 2022\n\n```\n@inproceedings{han2022yoco,\n  title={You Only Cut Once: Boosting Data Augmentation with a Single Cut},\n  author={Junlin Han and Pengfei Fang and Weihao Li and Jie Hong and Mohammad Ali Armin and Ian Reid and Lars Petersson and Hongdong Li},\n  booktitle={International Conference on Machine Learning (ICML)},\n  year={2022}\n}\n```\nYOCO cuts one image into two equal pieces, either in the height or the width dimension. The same data augmentations are performed independently within each piece. The augmented pieces are then concatenated to form the final augmented image.\n\u003cimg src='imgs/aug_overview.png' align=\"middle\" width=800\u003e\n\n## Results\n\nOverall, YOCO benefits almost all augmentations in multiple vision tasks (classification, contrastive learning, object detection, instance segmentation, image deraining, image super-resolution). Please see our paper for more details. \n\n## Easy usage\nApplying YOCO is easy; here is demo code performing YOCO at the batch level. 
\n```\nimport torch\nfrom torchvision import transforms\n\n# images: images to be augmented, a tensor of shape (b, c, h, w)\n# aug: composed augmentation operations; we use horizontal flip here\n# h: height of the images\n# w: width of the images\n\ndef YOCO(images, aug, h, w):\n    # cut along the width or the height dimension with equal probability,\n    # augment each half independently, then concatenate the halves back\n    if torch.rand(1) \u003e 0.5:\n        images = torch.cat((aug(images[:, :, :, 0:w // 2]), aug(images[:, :, :, w // 2:w])), dim=3)\n    else:\n        images = torch.cat((aug(images[:, :, 0:h // 2, :]), aug(images[:, :, h // 2:h, :])), dim=2)\n    return images\n\nfor i, (images, target) in enumerate(train_loader):\n    aug = torch.nn.Sequential(transforms.RandomHorizontalFlip())\n    _, _, h, w = images.shape\n    # perform augmentations with YOCO\n    images = YOCO(images, aug, h, w)\n```\nYou may use any [PyTorch built-in augmentation operations](https://pytorch.org/vision/stable/transforms.html) in place of the horizontal flip. \n\n## Prerequisites\n\nThis repo aims to make minimal modifications to the [official PyTorch ImageNet training code](https://github.com/pytorch/examples/tree/master/imagenet) and [MoCo](https://github.com/facebookresearch/moco). Follow their instructions to install the environments and prepare the datasets.\n\n[timm](https://github.com/rwightman/pytorch-image-models) is also required for ImageNet classification; simply run\n\n```\npip install timm\n```\n## Images augmented with YOCO\nFor each quadruplet, we show the original input image, the image produced by image-level augmentation, and the two images produced by YOCO with different cut dimensions.\n\u003cimg src='imgs/visu.png' align=\"middle\" width=800\u003e\n\n\n## Contact\njunlinhcv@gmail.com\n\nIf you have tried YOCO on other tasks/datasets/augmentations, please feel free to share the results. They will be collected and presented in this repo, whether positive or negative. 
Many thanks!\n\n## Acknowledgments\nOur code is built on the [official PyTorch ImageNet training code](https://github.com/pytorch/examples/tree/master/imagenet) and [MoCo](https://github.com/facebookresearch/moco). We thank the anonymous reviewers for their invaluable feedback!\n\n\n","funding_links":[],"categories":["Table of Contents"],"sub_categories":["Sample Mixup Policies in SL"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJunlinHan%2FYOCO","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FJunlinHan%2FYOCO","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJunlinHan%2FYOCO/lists"}