{"id":13709314,"url":"https://github.com/amazon-science/mix-generation","last_synced_at":"2025-07-03T03:06:13.149Z","repository":{"id":42109475,"uuid":"508979024","full_name":"amazon-science/mix-generation","owner":"amazon-science","description":"MixGen: A New Multi-Modal Data Augmentation","archived":false,"fork":false,"pushed_at":"2023-01-09T21:46:27.000Z","size":9527,"stargazers_count":123,"open_issues_count":2,"forks_count":7,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-06-26T11:04:15.933Z","etag":null,"topics":["data-augmentation","data-efficiency","multimodal","pretraining","vision-language"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/amazon-science.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2022-06-30T07:33:41.000Z","updated_at":"2025-06-11T11:27:34.000Z","dependencies_parsed_at":"2023-02-08T14:31:11.432Z","dependency_job_id":null,"html_url":"https://github.com/amazon-science/mix-generation","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/amazon-science/mix-generation","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fmix-generation","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fmix-generation/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fmix-generation/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fmix-generation/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/amazon-science","download_url":"https://codeload.github.com/amazon-science/mix-generation/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fmix-generation/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":263215284,"owners_count":23431892,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["data-augmentation","data-efficiency","multimodal","pretraining","vision-language"],"created_at":"2024-08-02T23:00:37.964Z","updated_at":"2025-07-03T03:06:13.099Z","avatar_url":"https://github.com/amazon-science.png","language":"Python","readme":"## MixGen: A New Multi-Modal Data Augmentation\n\nThis is the official PyTorch implementation of [MixGen](https://arxiv.org/abs/2206.08358), which is a joint data augmentation technique for vision-language representation learning to improve data efficiency.\n\n\u003cimg src=\"examples/mixgen.png\" width=\"600\"\u003e\n\nHere are some image-text pairs generated by MixGen,\n\n\u003cimg src=\"examples/mixgen_sample.png\" width=\"600\"\u003e\n\n\n## How to use\n\nMixGen is an input-level data augmentation technique, which can be plugged-and-played into existing vision-language learning methods with minimal code change.\n\nHere we adopt [ALBEF, NeurIPS'21](https://arxiv.org/abs/2107.07651) as an illustrating example. We only need to add one line between dataloader and model forward [here](https://github.com/salesforce/ALBEF/blob/main/Pretrain.py#L54).\n\nThat is, change from\n\n```\nfor i, (image, text) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):\n    optimizer.zero_grad()\n```\n\nto\n```\nimport mixgen as mg\nfor i, (image, text) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):\n    image, text = mg.mixgen(image, text, num=16)\n    optimizer.zero_grad()\n```\n\nAnd that's it!!! No more changes needed to be made. You can simply kicoff training just like ALBEF does,\n\n\n```\npython -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain.py\n```\n\n## Citation\n\nIf you find MixGen useful in your research, please kindly consider to cite the following paper.\n```\n@InProceedings{Hao_2023_WACV,\n    author    = {Hao, Xiaoshuai and Zhu, Yi and Appalaraju, Srikar and Zhang, Aston and Zhang, Wanqian and Li, Bo and Li, Mu},\n    title     = {MixGen: A New Multi-Modal Data Augmentation},\n    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},\n    month     = {January},\n    year      = {2023},\n    pages     = {379-389}\n}\n```\n\n\n## Security\n\nSee [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.\n\n## License\n\nThis project is licensed under the Apache-2.0 License.\n\n","funding_links":[],"categories":["Mixup for Multi-modality"],"sub_categories":["Label Mixup Methods"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Famazon-science%2Fmix-generation","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Famazon-science%2Fmix-generation","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Famazon-science%2Fmix-generation/lists"}