{"id":18340897,"url":"https://github.com/tinyvision/solider-humanpose","last_synced_at":"2025-10-30T15:31:36.495Z","repository":{"id":183418665,"uuid":"669049752","full_name":"tinyvision/SOLIDER-HumanPose","owner":"tinyvision","description":null,"archived":false,"fork":false,"pushed_at":"2023-07-21T08:19:56.000Z","size":23810,"stargazers_count":3,"open_issues_count":1,"forks_count":1,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-12-23T16:45:21.983Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tinyvision.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":".github/CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2023-07-21T08:16:14.000Z","updated_at":"2024-11-11T19:15:05.000Z","dependencies_parsed_at":"2023-07-24T11:51:17.317Z","dependency_job_id":null,"html_url":"https://github.com/tinyvision/SOLIDER-HumanPose","commit_stats":null,"previous_names":["tinyvision/solider-humanpose"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tinyvision%2FSOLIDER-HumanPose","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tinyvision%2FSOLIDER-HumanPose/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tinyvision%2FSOLIDER-HumanPose/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tinyvision%2FSOLIDER-HumanPose/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tinyvision","download_url":"https://codeload.github.
com/tinyvision/SOLIDER-HumanPose/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":238994478,"owners_count":19564835,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-05T20:24:36.242Z","updated_at":"2025-10-30T15:31:35.183Z","avatar_url":"https://github.com/tinyvision.png","language":"Python","readme":"# SOLIDER on [Human Pose]\n\nThis repo provides details about how to use the [SOLIDER](https://github.com/tinyvision/SOLIDER) pretrained representation for the human pose estimation task.\nWe modified the code from [mmpose](https://github.com/open-mmlab/mmpose); you can refer to the original repo for more details.\n\n## Installation and Datasets\n\nDetails of installation and dataset preparation can be found in [mmpose-installation](https://mmpose.readthedocs.io/en/latest/installation.html).\n\n## Prepare Pre-trained Models\nStep 1. Download models from [SOLIDER](https://github.com/tinyvision/SOLIDER), or use [SOLIDER](https://github.com/tinyvision/SOLIDER) to train your own models.\n\nStep 2. 
Put the pretrained models in the `pretrained` directory, and rename them to `./pretrained/solider_swin_tiny(small/base).pth`.\n\n## Training\nTrain with a single GPU or multiple GPUs:\n\n```shell\nsh run_train.sh\n```\n\n## Performance\n\n| Method | Model | COCO (AP/AR) |\n| ------ | :---: | :---: |\n| SOLIDER | Swin Tiny | 74.4/79.6 |\n| SOLIDER | Swin Small | 76.3/81.3 |\n| SOLIDER | Swin Base | 76.6/81.5 |\n\n- We use the pretrained models from [SOLIDER](https://github.com/tinyvision/SOLIDER).\n- The semantic weight used in these experiments is 0.8.\n\n## Citation\n\nIf you find this code useful for your research, please cite our paper:\n\n```\n@inproceedings{chen2023beyond,\n  title={Beyond Appearance: a Semantic Controllable Self-Supervised Learning Framework for Human-Centric Visual Tasks},\n  author={Weihua Chen and Xianzhe Xu and Jian Jia and Hao Luo and Yaohua Wang and Fan Wang and Rong Jin and Xiuyu Sun},\n  booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition},\n  year={2023},\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftinyvision%2Fsolider-humanpose","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftinyvision%2Fsolider-humanpose","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftinyvision%2Fsolider-humanpose/lists"}