{"id":13564634,"url":"https://github.com/NVlabs/DG-Net","last_synced_at":"2025-04-03T21:31:18.572Z","repository":{"id":38238116,"uuid":"194320424","full_name":"NVlabs/DG-Net","owner":"NVlabs","description":":couple: Joint Discriminative and Generative Learning for Person Re-identification. CVPR'19 (Oral) :couple:","archived":false,"fork":false,"pushed_at":"2023-07-09T10:24:47.000Z","size":7518,"stargazers_count":1292,"open_issues_count":28,"forks_count":226,"subscribers_count":29,"default_branch":"master","last_synced_at":"2025-04-01T15:14:10.135Z","etag":null,"topics":["apex","cuhk-np","dg-net","dukemtmc-reid","image-retrieval","image-search","market-1501","msmt17","open-reid","person-reid","person-reidentification","pytorch","re-identification"],"latest_commit_sha":null,"homepage":"https://www.zdzheng.xyz/publication/Joint-di2019","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/NVlabs.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2019-06-28T18:53:05.000Z","updated_at":"2025-03-31T20:32:41.000Z","dependencies_parsed_at":"2022-08-09T01:16:57.445Z","dependency_job_id":"0b811b87-ff39-4db8-91ac-81509cd93de8","html_url":"https://github.com/NVlabs/DG-Net","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVlabs%2FDG-Net","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVlabs%2FDG-Net/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVlabs%2FDG-Net/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVlabs%2FDG-Net/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/NVlabs","download_url":"https://codeload.github.com/NVlabs/DG-Net/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247083233,"owners_count":20880799,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["apex","cuhk-np","dg-net","dukemtmc-reid","image-retrieval","image-search","market-1501","msmt17","open-reid","person-reid","person-reidentification","pytorch","re-identification"],"created_at":"2024-08-01T13:01:33.904Z","updated_at":"2025-10-14T12:10:01.655Z","avatar_url":"https://github.com/NVlabs.png","language":"Python","readme":"[![License CC BY-NC-SA 4.0](https://img.shields.io/badge/license-CC4.0-blue.svg)](https://raw.githubusercontent.com/nvlabs/SPADE/master/LICENSE.md)\n![Python 3.6](https://img.shields.io/badge/python-3.6-green.svg)\n[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/NVlabs/DG-Net.svg?logo=lgtm\u0026logoWidth=18)](https://lgtm.com/projects/g/NVlabs/DG-Net/context:python)\n\n## Joint Discriminative and Generative Learning for Person Re-identification\n![](NxN.jpg)\n![](Demo.gif)\n\n[[Project]](http://zdzheng.xyz/DG-Net/) [[Paper]](https://arxiv.org/abs/1904.07223) [[YouTube]](https://www.youtube.com/watch?v=ubCrEAIpQs4) [[Bilibili]](https://www.bilibili.com/video/av51439240) [[Poster]](http://zdzheng.xyz/files/DGNet_poster.pdf)\n[[Supp]](http://jankautz.com/publications/JointReID_CVPR19_supp.pdf)\n\nJoint Discriminative and Generative Learning for Person Re-identification, CVPR 2019 (Oral)\u003cbr\u003e\n[Zhedong Zheng](http://zdzheng.xyz/), [Xiaodong Yang](https://xiaodongyang.org/), [Zhiding Yu](https://chrisding.github.io/), [Liang Zheng](http://liangzheng.com.cn/), [Yi Yang](https://www.uts.edu.au/staff/yi.yang), [Jan Kautz](http://jankautz.com/) \u003cbr\u003e\n\n## Table of contents\n* [News](#news)\n* [Features](#features)\n* [Prerequisites](#prerequisites)\n* [Getting Started](#getting-started)\n    * [Installation](#installation)\n    * [Dataset Preparation](#dataset-preparation)\n    * [Testing](#testing)\n    * [Training](#training)\n* [DG-Market](#dg-market)\n* [Tips](#tips)\n* [Citation](#citation)\n* [Related Work](#related-work)\n* [License](#license)\n\n## News\n- 02/18/2021: We release [DG-Net++](https://github.com/NVlabs/DG-Net-PP): the extention of DG-Net for unsupervised cross-domain re-id.\n- 08/24/2019: We add the direct transfer learning results of DG-Net [here](https://github.com/NVlabs/DG-Net#person-re-id-evaluation).\n- 08/01/2019: We add the support of multi-GPU training: `python train.py --config configs/latest.yaml  --gpu_ids 0,1`.\n\n## Features\nWe have supported:\n- Multi-GPU training (fp32)\n- [APEX](https://github.com/NVIDIA/apex) to save GPU memory (fp16/fp32)\n- Multi-query evaluation\n- Random erasing\n- Visualize training curves\n- Generate all figures in the paper \n\n## Prerequisites\n\n- Python 3.6\n- GPU memory \u003e= 15G (fp32)\n- GPU memory \u003e= 10G (fp16/fp32)\n- NumPy\n- PyTorch 1.0+\n- [Optional] APEX (fp16/fp32)\n\n## Getting Started\n### Installation\n- Install [PyTorch](http://pytorch.org/) \n- Install torchvision from the source:\n```\ngit clone https://github.com/pytorch/vision\ncd vision\npython setup.py install\n```\n- [Optional] You may skip it. Install APEX from the source:\n```\ngit clone https://github.com/NVIDIA/apex.git\ncd apex\npython setup.py install --cuda_ext --cpp_ext\n```\n- Clone this repo:\n```bash\ngit clone https://github.com/NVlabs/DG-Net.git\ncd DG-Net/\n```\n\nOur code is tested on PyTorch 1.0.0+ and torchvision 0.2.1+ .\n\n### Dataset Preparation\nDownload the dataset [Market-1501](http://www.liangzheng.com.cn/Project/project_reid.html) [[Google Drive]](https://drive.google.com/file/d/0B8-rUzbwVRk0c054eEozWG9COHM/view) [[Baidu Disk]](https://pan.baidu.com/s/1ntIi2Op)\n\nPreparation: put the images with the same id in one folder. You may use \n```bash\npython prepare-market.py          # for Market-1501\n```\nNote to modify the dataset path to your own path.\n\n### Testing\n\n#### Download the trained model\nWe provide our trained model. You may download it from [Google Drive](https://drive.google.com/open?id=1lL18FZX1uZMWKzaZOuPe3IuAdfUYyJKH) (or [Baidu Disk](https://pan.baidu.com/s/1503831XfW0y4g3PHir91yw) password: rqvf). You may download and move it to the `outputs`.\n```\n├── outputs/\n│   ├── E0.5new_reid0.5_w30000\n├── models\n│   ├── best/                   \n```\n#### Person re-id evaluation\n- Supervised learning\n\n|   | Market-1501  | DukeMTMC-reID  | MSMT17  | CUHK03-NP |\n|---|--------------|----------------|----------|-----------|\n| Rank@1 | 94.8% | 86.6% | 77.2% | 65.6% |\n| mAP    | 86.0% | 74.8% | 52.3% | 61.1% |\n\n\n- Direct transfer learning     \nTo verify the generalizability of DG-Net, we train the model on dataset A and directly test the model on dataset B (with no adaptation). \nWe denote the direct transfer learning protocol as `A→B`.\n\n|   |Market→Duke|Duke→Market|Market→MSMT|MSMT→Market|Duke→MSMT|MSMT→Duke|\n|---|----------------|----------------| -------------- |----------------| -------------- |----------------|\n| Rank@1  | 42.62%   | 56.12%         | 17.11%         | 61.76%         | 20.59%         | 61.89%         |\n| Rank@5  | 58.57%   | 72.18%         | 26.66%         | 77.67%         | 31.67%         | 75.81%         |\n| Rank@10 | 64.63%   | 78.12%         | 31.62%         | 83.25%         | 37.04%         | 80.34%         |\n| mAP     | 24.25%   | 26.83%         | 5.41%          | 33.62%         | 6.35%          | 40.69%         |\n\n\n#### Image generation evaluation\n\nPlease check the `README.md` in the `./visual_tools`. \n\nYou may use the `./visual_tools/test_folder.py` to generate lots of images and then do the evaluation. The only thing you need to modify is the data path in [SSIM](https://github.com/layumi/PerceptualSimilarity) and [FID](https://github.com/layumi/TTUR).\n\n### Training\n\n#### Train a teacher model\nYou may directly download our trained teacher model from [Google Drive](https://drive.google.com/open?id=1lL18FZX1uZMWKzaZOuPe3IuAdfUYyJKH) (or [Baidu Disk](https://pan.baidu.com/s/1503831XfW0y4g3PHir91yw) password: rqvf).\nIf you want to have it trained by yourself, please check the [person re-id baseline](https://github.com/layumi/Person_reID_baseline_pytorch) repository to train a teacher model, then copy and put it in the `./models`.\n```\n├── models/\n│   ├── best/                   /* teacher model for Market-1501\n│       ├── net_last.pth        /* model file\n│       ├── ...\n```\n\n#### Train DG-Net \n1. Setup the yaml file. Check out `configs/latest.yaml`. Change the data_root field to the path of your prepared folder-based dataset, e.g. `../Market-1501/pytorch`.\n\n\n2. Start training\n```\npython train.py --config configs/latest.yaml\n```\nOr train with low precision (fp16)\n```\npython train.py --config configs/latest-fp16.yaml\n```\nIntermediate image outputs and model binary files are saved in `outputs/latest`.\n\n3. Check the loss log\n```\n tensorboard --logdir logs/latest\n```\n\n## DG-Market\n![](https://github.com/layumi/DG-Net/blob/gh-pages/index_files/DGMarket-logo.png)\n\nWe provide our generated images and make a large-scale synthetic dataset called DG-Market. This dataset is generated by our DG-Net and consists of 128,307 images (613MB), about 10 times larger than the training set of original Market-1501 (even much more can be generated with DG-Net). It can be used as a source of unlabeled training dataset for semi-supervised learning. You may download the dataset from [Google Drive](https://drive.google.com/file/d/126Gn90Tzpk3zWp2c7OBYPKc-ZjhptKDo/view?usp=sharing) (or [Baidu Disk](https://pan.baidu.com/s/1n4M6s-qvE08J8SOOWtWfgw) password: qxyh).  \n\n|   |  DG-Market   | Market-1501 (training) |\n|---|--------------|-------------|\n| #identity| \t-   |  751        |\n| #images| 128,307 |  12,936     |\n\nQuick Download via [gdrive](https://github.com/prasmussen/gdrive)\n```bash\nwget https://github.com/prasmussen/gdrive/releases/download/2.1.1/gdrive_2.1.1_linux_386.tar.gz\ntar -xzvf gdrive_2.1.1_linux_386.tar.gz\ngdrive download 126Gn90Tzpk3zWp2c7OBYPKc-ZjhptKDo\nunzip DG-Market.zip\n```\n\n## Tips\nNote the format of camera id and number of cameras. For some datasets (e.g., MSMT17), there are more than 10 cameras. You need to modify the preparation and evaluation code to read the double-digit camera id. For some vehicle re-id datasets (e.g., VeRi) having different naming rules, you also need to modify the preparation and evaluation code.\n\n## Citation\nPlease cite this paper if it helps your research:\n```bibtex\n@inproceedings{zheng2019joint,\n  title={Joint discriminative and generative learning for person re-identification},\n  author={Zheng, Zhedong and Yang, Xiaodong and Yu, Zhiding and Zheng, Liang and Yang, Yi and Kautz, Jan},\n  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},\n  year={2019}\n}\n```\n\n## Related Work\nOther GAN-based methods compared in the paper include [LSGAN](https://github.com/layumi/DCGAN-pytorch), [FDGAN](https://github.com/layumi/FD-GAN) and [PG2GAN](https://github.com/charliememory/Pose-Guided-Person-Image-Generation). We forked the code and made some changes for evaluatation, thank the authors for their great work. We would also like to thank to the great projects in [person re-id baseline](https://github.com/layumi/Person_reID_baseline_pytorch), [MUNIT](https://github.com/NVlabs/MUNIT) and [DRIT](https://github.com/HsinYingLee/DRIT).\n\n## License\nCopyright (C) 2019 NVIDIA Corporation. All rights reserved. Licensed under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) (**Attribution-NonCommercial-ShareAlike 4.0 International**). The code is released for academic research use only. For commercial use, please contact [researchinquiries@nvidia.com](researchinquiries@nvidia.com).\n","funding_links":[],"categories":["Python","Uncategorized","5. Generation of synthetic content","Paper implementations｜论文实现","Paper implementations"],"sub_categories":["Uncategorized","5.3 Generation Images","Other libraries｜其他库:","Other libraries:"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FNVlabs%2FDG-Net","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FNVlabs%2FDG-Net","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FNVlabs%2FDG-Net/lists"}