{"id":28518410,"url":"https://github.com/sharpiless/rgal","last_synced_at":"2025-07-05T05:31:52.092Z","repository":{"id":269005491,"uuid":"906136082","full_name":"Sharpiless/RGAL","owner":"Sharpiless","description":"official code for our IJCV paper \"Relation-Guided Adversarial Learning for Data-Free Knowledge Transfer\"","archived":false,"fork":false,"pushed_at":"2024-12-27T02:23:08.000Z","size":17146,"stargazers_count":10,"open_issues_count":0,"forks_count":2,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-06-09T05:44:29.040Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Sharpiless.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-12-20T08:38:53.000Z","updated_at":"2025-03-20T08:50:53.000Z","dependencies_parsed_at":"2024-12-20T09:39:38.006Z","dependency_job_id":null,"html_url":"https://github.com/Sharpiless/RGAL","commit_stats":null,"previous_names":["sharpiless/rgal"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/Sharpiless/RGAL","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FRGAL","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FRGAL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FRGAL/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FRGAL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Sharpile
ss","download_url":"https://codeload.github.com/Sharpiless/RGAL/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FRGAL/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":263689365,"owners_count":23496496,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-06-09T05:37:09.489Z","updated_at":"2025-07-05T05:31:52.087Z","avatar_url":"https://github.com/Sharpiless.png","language":"Python","readme":"## News\r\n* `2024/12/20` We release the code for the *data-free knowledge distillation* tasks.\r\n\r\n# RGAL\r\n\r\nThis is a PyTorch implementation of the following paper:\r\n\r\n**Relation-Guided Adversarial Learning for Data-Free Knowledge Transfer**, IJCV 2024.\r\n\r\nYingping Liang and Ying Fu\r\n\r\n[Paper](https://link.springer.com/article/10.1007/s11263-024-02303-4)\r\n\r\n\u003cimg src=\"misc/framework.png\" width=\"100%\" \u003e\r\n\r\n**Abstract**: *Data-free knowledge distillation transfers knowledge by recovering training data from a pre-trained model. Despite the recent success of seeking global data diversity, the diversity within each class and the similarity among different classes are largely overlooked, resulting in data homogeneity and limited performance. In this paper, we introduce a novel Relation-Guided Adversarial Learning method with triplet losses, which solves the homogeneity problem from two aspects. To be specific, our method aims to promote both intra-class diversity and inter-class confusion of the generated samples. 
To this end, we design two phases, an image synthesis phase and a student training phase. In the image synthesis phase, we construct an optimization process to push away samples with the same labels and pull close samples with different labels, leading to intra-class diversity and inter-class confusion, respectively. Then, in the student training phase, we perform an opposite optimization, which adversarially attempts to reduce the distance of samples of the same classes and enlarge the distance of samples of different classes. To mitigate the conflict of seeking high global diversity and keeping inter-class confusion, we propose a focal weighted sampling strategy by selecting the negative in the triplets unevenly within a finite range of distance. RGAL shows significant improvement over previous state-of-the-art methods in accuracy and data efficiency. Besides, RGAL can be inserted into state-of-the-art methods on various data-free knowledge transfer applications. Experiments on various benchmarks demonstrate the effectiveness and generalizability of our proposed method on various tasks, especially data-free knowledge distillation, data-free quantization, and non-exemplar incremental learning.*\r\n\r\nhttps://github.com/user-attachments/assets/eb78306f-1fbe-465a-9996-7315716f0b55\r\n\r\n## Installation\r\n\r\n```\r\nconda create -n rgal python=3.9\r\nconda activate rgal\r\npip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113\r\npip install scipy tqdm pillow kornia\r\n```\r\n\r\n## Run\r\n\r\nThe dataset (CIFAR-10/-100) will be downloaded automatically when running.\r\n\r\nWe provide a running script:\r\n```\r\npython main.py \\\r\n--epochs 200 \\\r\n--dataset cifar10 \\\r\n--batch_size 128 \\\r\n--synthesis_batch_size 256 \\\r\n--teacher wrn40_2 \\\r\n--student wrn16_1 \\\r\n--lr 0.1 \\\r\n--kd_steps 400 \\\r\n--ep_steps 400 \\\r\n--g_steps 400 \\\r\n--lr_g 1e-3 \\\r\n--adv 1.0 \\\r\n--bn 1.0 \\\r\n--oh 1.0 \\\r\n--act 0.001 \\\r\n--gpu 0 \\\r\n--seed 0 \\\r\n--T 20 \\\r\n--save_dir run/scratch1 \\\r\n--log_tag scratch1 \\\r\n--cd_loss 0.1 \\\r\n--gram_loss 0 \\\r\n--teacher_weights best_model/cifar10_wrn40_2.pth \\\r\n--custom_steps 1.0 \\\r\n--print_freq 50 \\\r\n--triplet_target student \\\r\n--pair_sample \\\r\n--striplet_feature global \\\r\n--start_layer 2 \\\r\n--triplet 0.1 \\\r\n--striplet 0.1 \\\r\n--balanced_sampling \\\r\n--balance 0.1\r\n```\r\n\r\nwhere \"--triplet\" and \"--striplet\" indicate the weights of our proposed triplet losses in the data generation stage and the distillation stage, respectively.\r\n\r\nTo run our method with different teacher and student models, modify \"--teacher\" and \"--student\".\r\n\r\n\"--balanced_sampling\" enables the paired sampling strategy described in our paper.\r\n\r\nPretrained checkpoints for the examples are available at [best_model](https://github.com/Sharpiless/RGAL/tree/main/best_model).\r\n\r\n![image](https://github.com/user-attachments/assets/3c8b7698-7f11-430c-ac6d-d7d0b4a22a7f)\r\n\r\n## Visualization\r\n\r\nPlease refer to [ZSKT](https://github.com/polo5/ZeroShotKnowledgeTransfer).\r\n\r\n## License and Citation\r\nThis repository can only be used for personal/research/non-commercial purposes.\r\nPlease cite the following paper if this model helps your research:\r\n\r\n```\r\n@article{liang2024relation,\r\n  title={Relation-Guided Adversarial Learning for Data-Free Knowledge Transfer},\r\n  author={Liang, Yingping and Fu, Ying},\r\n  journal={International Journal of Computer Vision},\r\n  pages={1--18},\r\n  year={2024},\r\n  publisher={Springer}\r\n}\r\n```\r\n\r\n## Acknowledgments\r\n* The code for inference and training is heavily borrowed from [CMI](https://github.com/zju-vipa/CMI); we thank the authors for their great effort.\r\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsharpiless%2Frgal","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsharpiless%2Frgal","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsharpiless%2Frgal/lists"}
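The README's abstract describes a focal weighted sampling strategy: when building triplets, the negative is chosen unevenly, but only among candidates whose distance to the anchor lies within a finite range. Below is a toy sketch in plain Python of one way such range-limited, weighted negative selection could look. The exponential weighting `exp(-gamma * d)`, the `d_min`/`d_max` bounds, and the function name are illustrative assumptions, not the weighting actually used in the RGAL code base.

```python
import math
import random

def focal_weighted_negative(distances, d_min=0.2, d_max=2.0, gamma=2.0, rng=None):
    """Toy sketch: pick one negative index for a triplet, favouring harder
    (closer) negatives, restricted to candidates whose anchor distance falls
    inside the finite range [d_min, d_max].

    distances[i] is the anchor-to-candidate distance. The focal-style weight
    exp(-gamma * d) and the range bounds are illustrative assumptions.
    """
    rng = rng or random.Random(0)
    # Keep only candidates inside the finite distance range.
    in_range = [i for i, d in enumerate(distances) if d_min <= d <= d_max]
    if not in_range:
        # Fallback: no candidate in range, take the globally closest one.
        return min(range(len(distances)), key=lambda i: distances[i])
    # Uneven (focal-style) weights: closer negatives get larger weight.
    weights = [math.exp(-gamma * distances[i]) for i in in_range]
    # Weighted random draw over the in-range candidates.
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in zip(in_range, weights):
        acc += w
        if r <= acc:
            return i
    return in_range[-1]
```

With only one candidate inside the range, the draw is deterministic, e.g. `focal_weighted_negative([0.1, 1.0, 5.0])` returns index `1`; with no candidate in range, `focal_weighted_negative([5.0, 6.0])` falls back to the closest candidate, index `0`.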