{"id":13626874,"url":"https://github.com/layumi/Pedestrian_Alignment","last_synced_at":"2025-04-16T19:30:53.964Z","repository":{"id":93935946,"uuid":"96018179","full_name":"layumi/Pedestrian_Alignment","owner":"layumi","description":"TCSVT2018 Pedestrian Alignment Network for Large-scale Person Re-identification","archived":false,"fork":false,"pushed_at":"2021-07-21T03:21:16.000Z","size":5361,"stargazers_count":235,"open_issues_count":13,"forks_count":66,"subscribers_count":13,"default_branch":"master","last_synced_at":"2024-10-25T04:24:49.019Z","etag":null,"topics":["alignment","matconvnet","matlab","person-re-identification","person-recognition","person-reid","re-identification","reid","reidentification"],"latest_commit_sha":null,"homepage":"https://ieeexplore.ieee.org/document/8481710","language":"Cuda","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/layumi.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2017-07-02T11:12:40.000Z","updated_at":"2024-10-17T06:09:27.000Z","dependencies_parsed_at":"2023-03-16T13:15:09.671Z","dependency_job_id":null,"html_url":"https://github.com/layumi/Pedestrian_Alignment","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/layumi%2FPedestrian_Alignment","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/layumi%2FPedestrian_Alignment/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/layumi%2FPedestrian_Alignment/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/layu
mi%2FPedestrian_Alignment/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/layumi","download_url":"https://codeload.github.com/layumi/Pedestrian_Alignment/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223549117,"owners_count":17163600,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["alignment","matconvnet","matlab","person-re-identification","person-recognition","person-reid","re-identification","reid","reidentification"],"created_at":"2024-08-01T22:00:24.006Z","updated_at":"2024-11-08T17:30:41.954Z","avatar_url":"https://github.com/layumi.png","language":"Cuda","readme":"# Pedestrian Alignment Network for Person Re-identification\n\nThis repo is for our IEEE TCSVT paper.\n\n[[10-min Intro on YouTube]](https://www.youtube.com/watch?v=OJR43TzS3a8\u0026t=41s\u0026ab_channel=IEEETransonCircuits%26SystemforVideoTech)\n\n(arXiv link: https://arxiv.org/abs/1707.00408; IEEE link: https://ieeexplore.ieee.org/document/8481710)\n\nThe main idea is to align the pedestrians within the bounding boxes and reduce noisy factors, i.e., scale and pose variance.\n\n## Network Structure\n![](https://github.com/layumi/Pedestrian_Alignment/blob/master/fig2.jpg)\nFor more details, you can see this [png file](https://raw.githubusercontent.com/layumi/Pedestrian_Alignment/master/PAN.png). 
But it is low-resolution for now, and I may replace it with a higher-resolution version soon.\n\n## Installation\n1. Clone this repo.\n\n    git clone https://github.com/layumi/Pedestrian_Alignment.git\n    cd Pedestrian_Alignment\n    mkdir data\n\n2. Download the pre-trained model. Put it into './data'.\n\n    cd data\n    wget http://www.vlfeat.org/matconvnet/models/imagenet-resnet-50-dag.mat\n    \n3. Compile Matconvnet.\n**(Note that I have included my Matconvnet in this repo, so you do not need to download it again. I have changed some code compared with the original version. For example, one of the differences is in `/matlab/+dagnn/@DagNN/initParams.m`: if a layer already has params, I will not initialize it again, which matters especially for pretrained models.)**\n\nYou just need to uncomment and modify some lines in `gpu_compile.m` and run it in Matlab. Try it~\n(The code does not support cuDNN 6.0. You may just turn off `Enablecudnn` or try cuDNN 5.1.)\n\nIf compilation fails, you may refer to http://www.vlfeat.org/matconvnet/install/\n    \n## Dataset\nDownload the [Market1501 Dataset](http://www.liangzheng.org/Project/project_reid.html). [[Google]](https://drive.google.com/file/d/0B8-rUzbwVRk0c054eEozWG9COHM/view) [[Baidu]](https://pan.baidu.com/s/1ntIi2Op)\n\nFor training on CUHK03, we follow the new evaluation protocol in the [CVPR2017 paper](https://github.com/zhunzhong07/person-re-ranking). It conducts a multi-shot person re-ID evaluation and only needs to be run once.\n\n## Train\n1. Add your dataset path into `prepare_data.m` and run it. Make sure the code outputs the right image path.\n\n2. Uncomment https://github.com/layumi/Pedestrian_Alignment/blob/master/resnet52_market.m#L23 \n\nRun `train_id_net_res_market_new.m` to pretrain the base branch.\n\n3. Comment out https://github.com/layumi/Pedestrian_Alignment/blob/master/resnet52_market.m#L23 \n\nRun `train_id_net_res_market_align.m` to fine-tune the whole net.\n\n## Test\n1. 
Run `test/test_gallery_stn_base.m` and `test/test_gallery_stn_align.m` to extract the image features from the base branch and the alignment branch. Note that you need to change the directory path in the code. The features will be stored in a `.mat` file, which you can then use for the evaluation.\n\n2. Evaluate the features on Market-1501. Run `evaluation/zzd_evaluation_res_faster.m`. You should obtain a single-query result close to the following.\n\n| Methods               | Rank@1 | mAP    | \n| --------              | -----  | ----   | \n| Ours           | 82.81% | 63.35% | \n\nYou may find our trained model at [GoogleDrive](https://drive.google.com/open?id=1X09jnURIicQk7ivHjVkq55NHPB86hQT0).\n\n## Visualize Results\nWe conduct an extra, interesting experiment:\n**When zooming in on the input image (adding scale variance), how does our alignment network react?**\n\nWe observe a robust transform in the output image (focusing on the human body and keeping the scale).\n\nThe left image is the input; the right image is the output of our network.\n\n![](https://github.com/layumi/Person_re-ID_stn/blob/master/gif/0018_c4s1_002351_02_zoomin.gif)\n    ![](https://github.com/layumi/Person_re-ID_stn/blob/master/gif/0153_c4s1_026076_03_zoomin.gif)\n    ![](https://github.com/layumi/Pedestrian_Alignment/blob/master/gif/0520_c4s3_001373_03_zoomin.gif)\n\n\n![](https://github.com/layumi/Pedestrian_Alignment/blob/master/gif/0520_c5s1_143995_06_zoomin.gif)\n    ![](https://github.com/layumi/Pedestrian_Alignment/blob/master/gif/0345_c6s1_079326_07_zoomin.gif)\n    ![](https://github.com/layumi/Pedestrian_Alignment/blob/master/gif/0153_c4s1_025451_01_zoomin.gif)\n\n## Citation\nPlease cite this paper in your publications if it helps your research:\n```\n@article{zheng2017pedestrian,\n  title={Pedestrian Alignment Network for Large-scale Person Re-identification},\n  author={Zheng, Zhedong and Zheng, Liang and Yang, Yi},\n  doi={10.1109/TCSVT.2018.2873599},\n  note={\\mbox{doi}:\\url{10.1109/TCSVT.2018.2873599}},\n  
journal={IEEE Transactions on Circuits and Systems for Video Technology},\n  year={2018}\n}\n```\n\n## Acknowledgements\nThanks to Qiule Sun for the suggestions.\n","funding_links":[],"categories":["Uncategorized","Codes"],"sub_categories":["Uncategorized"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flayumi%2FPedestrian_Alignment","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flayumi%2FPedestrian_Alignment","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flayumi%2FPedestrian_Alignment/lists"}