# Dress Code Dataset

This repository presents the virtual try-on dataset proposed in:

*D. Morelli, M. Fincato, M. Cornia, F. Landi, F. Cesari, R. Cucchiara* </br>
**Dress Code: High-Resolution Multi-Category Virtual Try-On** </br>

**[[Paper](https://arxiv.org/abs/2204.08532)]** **[[Dataset Request Form](https://forms.gle/72Bpeh48P7zQimin7)]** **[[Try-On Demo](https://ailb-web.ing.unimore.it/dress-code)]**

**IMPORTANT!**
- By making any use of the Dress Code Dataset, you accept and agree to comply with the terms and conditions reported [here](https://github.com/aimagelab/dress-code/blob/main/LICENCE).
- The dataset will not be released to private companies.
- When filling out the dataset request form, non-institutional emails (e.g. gmail.com, qq.com, etc.) are not allowed.
- The signed release agreement form is mandatory (see the dataset request form for more details). Incomplete or unsigned release agreement forms are not accepted and will not receive a response. Typed signatures are not allowed.

**Requests are manually validated on a weekly basis. If you do not receive a response, your request does not meet the outlined requirements.**

<hr>

Please cite with the following BibTeX:

```
@inproceedings{morelli2022dresscode,
  title={{Dress Code: High-Resolution Multi-Category Virtual Try-On}},
  author={Morelli, Davide and Fincato, Matteo and Cornia, Marcella and Landi, Federico and Cesari, Fabio and Cucchiara, Rita},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2022}
}
```

<p align="center">
    <img src="images/dressCodePrev.gif" style="max-width: 800px; width: 80%"/>
</p>

## Dataset

We collected a new dataset for image-based virtual try-on composed of image pairs coming from different catalogs of YOOX NET-A-PORTER. </br>
The dataset contains more than 50k high-resolution model and clothing image pairs, divided into three categories (*i.e.* dresses, upper-body clothes, lower-body clothes).

<p align="center">
    <img src="images/dataset_comparison.gif" style="max-width: 800px; width: 80%">
</p>

### Summary
- 53792 garments
- 107584 images
- 3 categories
  - upper body
  - lower body
  - dresses
- 1024 x 768 image resolution
- additional info
  - keypoints
  - skeletons
  - human label maps
  - human dense poses

### Additional Info
Along with each model and garment image pair, we also provide the keypoints, skeleton, human label map, and dense pose.

<p align="center">
  <img src="images/addittional_infos.png" style="max-width: 800px; width: 80%"/>
</p>

<details><summary>More info</summary>

### Keypoints
For all image pairs of the dataset, we stored the joint coordinates of human poses.
In particular, we used [OpenPose](https://github.com/Hzzone/pytorch-openpose) [1] to extract 18 keypoints for each human body.

For each image, we provide a JSON file containing a dictionary with the `keypoints` key.
The value of this key is a list of 18 elements, representing the joints of the human body. Each element is a list of 4 values, where the first two indicate the coordinates on the x and y axes, respectively.
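To make the keypoints format concrete, here is a minimal parsing sketch. The file name and directory layout below are hypothetical placeholders; only the `keypoints` key and the 18 × 4 list structure follow the description above.

```python
import json

# Hypothetical path: adjust to the actual layout of the released dataset.
keypoints_path = "upper_body/keypoints/000001_2.json"

with open(keypoints_path) as f:
    pose = json.load(f)

joints = pose["keypoints"]        # list of 18 joints
assert len(joints) == 18

for idx, joint in enumerate(joints):
    x, y = joint[0], joint[1]     # first two values are the x and y coordinates
    print(f"joint {idx:2d}: ({x:.1f}, {y:.1f})")
```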
### Skeletons
Skeletons are RGB images obtained by connecting the keypoints with lines.

### Human Label Map

We employed a human parser to assign each pixel of the image to a specific category, thus obtaining a segmentation mask for each target model.
Specifically, we used the [SCHP model](https://github.com/PeikeLi/Self-Correction-Human-Parsing) [2] trained on the ATR dataset, a large single-person human parsing dataset focused on fashion images, with 18 classes.

The resulting images consist of a single channel filled with the category label values.
Categories are mapped as follows:

```ruby
 0    background
 1    hat
 2    hair
 3    sunglasses
 4    upper_clothes
 5    skirt
 6    pants
 7    dress
 8    belt
 9    left_shoe
10    right_shoe
11    head
12    left_leg
13    right_leg
14    left_arm
15    right_arm
16    bag
17    scarf
```
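As an illustration of how these label values can be used, here is a minimal sketch that loads a label map and extracts a binary mask for the upper-body garment. The file name is hypothetical; the category IDs follow the mapping above.

```python
import numpy as np
from PIL import Image

# Hypothetical path: adjust to the actual layout of the released dataset.
label_map = np.array(Image.open("upper_body/label_maps/000001_4.png"))

# Single-channel image: each pixel holds a category ID from the mapping above.
UPPER_CLOTHES = 4
garment_mask = (label_map == UPPER_CLOTHES)

print("garment pixels:", int(garment_mask.sum()))
```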
### Human Dense Pose

We also extracted dense labels and UV mappings from all the model images using [DensePose](https://github.com/facebookresearch/detectron2/tree/main/projects/DensePose) [3].

</details>

## Experimental Results

### Low Resolution 256 x 192

| Name | SSIM | FID | KID |
|:---:|:---:|:---:|:---:|
| CP-VTON [4] | 0.803 | 35.16 | 2.245 |
| CP-VTON+ [5] | 0.902 | 25.19 | 1.586 |
| CP-VTON* [4] | 0.874 | 18.99 | 1.117 |
| PFAFN [6] | 0.902 | 14.38 | 0.743 |
| VITON-GT [7] | 0.899 | 13.80 | 0.711 |
| WUTON [8] | 0.902 | 13.28 | 0.771 |
| ACGPN [9] | 0.868 | 13.79 | 0.818 |
| OURS | 0.906 | 11.40 | 0.570 |

## Code
Due to a collaboration with a firm, we cannot release the code. However, we supply an empty PyTorch project to load the data (see also the data-loading sketch at the end of this README).

## References

[1] Cao, et al. "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields." IEEE TPAMI, 2019.

[2] Li, et al. "Self-Correction for Human Parsing." arXiv, 2019.

[3] Güler, et al. "DensePose: Dense Human Pose Estimation in the Wild." CVPR, 2018.

[4] Wang, et al. "Toward Characteristic-Preserving Image-based Virtual Try-On Network." ECCV, 2018.

[5] Minar, et al. "CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On." CVPR Workshops, 2020.

[6] Ge, et al. "Parser-Free Virtual Try-On via Distilling Appearance Flows." CVPR, 2021.

[7] Fincato, et al. "VITON-GT: An Image-based Virtual Try-On Model with Geometric Transformations." ICPR, 2020.

[8] Issenhuth, et al. "Do Not Mask What You Do Not Need to Mask: A Parser-Free Virtual Try-On." ECCV, 2020.

[9] Yang, et al. "Towards Photo-Realistic Virtual Try-On by Adaptively Generating-Preserving Image Content." CVPR, 2020.

## Contact

If you have any general questions about our dataset, please use the [public issues section](https://github.com/aimagelab/dress-code/issues) of this GitHub repo. Alternatively, drop us an e-mail at davide.morelli [at] unimore.it or marcella.cornia [at] unimore.it.
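## Data Loading Sketch

The snippet below is a minimal, unofficial sketch of a PyTorch `Dataset` for the paired images, in the spirit of the empty data-loading project mentioned in the Code section. The directory names (`images`, `keypoints`, `label_maps`), the file-name suffixes, and the pairing convention are assumptions made for illustration; adapt them to the layout of the released dataset.

```python
import json
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class DressCodeSketchDataset(Dataset):
    """Unofficial sketch of a paired-sample loader.

    Assumes a hypothetical layout per category:
        <root>/<category>/images/<id>_0.jpg      # model image
        <root>/<category>/images/<id>_1.jpg      # garment image
        <root>/<category>/keypoints/<id>_2.json  # OpenPose joints (18 x 4)
        <root>/<category>/label_maps/<id>_4.png  # single-channel parsing map
    """

    def __init__(self, root, category="upper_body"):
        self.base = Path(root) / category
        # One sample per model image; the "_0"/"_1" suffix convention is an assumption.
        self.ids = sorted(p.stem[:-2] for p in (self.base / "images").glob("*_0.jpg"))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, index):
        sid = self.ids[index]
        model_img = self.to_tensor(Image.open(self.base / "images" / f"{sid}_0.jpg").convert("RGB"))
        garment_img = self.to_tensor(Image.open(self.base / "images" / f"{sid}_1.jpg").convert("RGB"))
        # Keep the label map as integer category IDs rather than normalized floats.
        label_map = torch.from_numpy(
            np.array(Image.open(self.base / "label_maps" / f"{sid}_4.png"), dtype=np.int64)
        )
        with open(self.base / "keypoints" / f"{sid}_2.json") as f:
            keypoints = torch.tensor(json.load(f)["keypoints"], dtype=torch.float32)
        return {
            "model": model_img,          # (3, H, W) float in [0, 1]
            "garment": garment_img,      # (3, H, W) float in [0, 1]
            "label_map": label_map,      # (H, W) int64 category IDs
            "keypoints": keypoints,      # (18, 4), first two columns are x, y
        }
```

Because every field is returned with a fixed shape (images are 1024 x 768, keypoints are 18 x 4), a standard `torch.utils.data.DataLoader` can batch these samples directly.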