https://github.com/layumi/University1652-triplet-loss
triplet loss with hard negative / soft margin for the University-1652 dataset.
- Host: GitHub
- URL: https://github.com/layumi/University1652-triplet-loss
- Owner: layumi
- Created: 2019-11-10T09:39:54.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2020-07-12T01:36:14.000Z (over 4 years ago)
- Last Synced: 2024-11-15T01:50:32.419Z (about 2 months ago)
- Topics: dataset, drone, university-1652
- Language: Python
- Homepage:
- Size: 55.7 KB
- Stars: 8
- Watchers: 3
- Forks: 3
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
## University-1652_triplet-loss
Baseline code (with bottleneck) for University-1652 (PyTorch).
Any suggestions are welcome.
## Model Structure
You may learn more from `model.py`. We use the L2-normalized 2048-dim feature as the input.

## Tips
- The code is not fully optimized. We strongly suggest using fp16 and wrapping inference in `with torch.no_grad()`. We will update the code later.
- A larger margin may lead to a worse local minimum (margin = 0.1-0.3 may give better results).
- A per-class (stratified) sampler is not necessary.
- The Adam optimizer is not necessary.

## Prerequisites
- Python 3.6
- GPU Memory >= 6G
- Numpy
- PyTorch 0.3+ **(some reports found that updating NumPy restores the correct accuracy; if you only get 50-80% Top-1 accuracy, try updating it)**

We have successfully run the code with NumPy 1.12.1 and 1.13.1.

## Getting started
### Installation
- Install Pytorch from http://pytorch.org/
- Install Torchvision from the source
```
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
```
Because PyTorch and Torchvision are ongoing projects, note that our code has been tested with PyTorch 0.3.0/0.4.0 and Torchvision 0.2.0.
## Dataset & Preparation
Download the [Market1501 Dataset](http://www.liangzheng.org/Project/project_reid.html).

Preparation: put the images with the same ID in one folder. You may use
```bash
python prepare.py
```
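The exact layout that `prepare.py` produces may differ, but the idea of the preparation step can be sketched as follows, assuming Market-1501-style filenames where the ID is the token before the first underscore (e.g. `0002_c1s1_000451_03.jpg`); the function name `prepare` and both directory arguments are illustrative:

```python
import os
import shutil

def prepare(src_dir: str, dst_dir: str) -> None:
    """Copy each image into a subfolder named after its ID.

    Assumes Market-1501-style filenames where the ID is the token
    before the first underscore, e.g. 0002_c1s1_000451_03.jpg -> 0002/.
    """
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.endswith('.jpg'):
            continue
        pid = name.split('_')[0]  # ID prefix of the filename
        id_dir = os.path.join(dst_dir, pid)
        os.makedirs(id_dir, exist_ok=True)
        shutil.copyfile(os.path.join(src_dir, name),
                        os.path.join(id_dir, name))
```

This folder-per-ID layout is what lets a standard `torchvision.datasets.ImageFolder` loader recover the class labels during training.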
Remember to change the dataset path to your own path. Furthermore, you can also test our code on the [DukeMTMC-reID Dataset](https://github.com/layumi/DukeMTMC-reID_evaluation).
Our baseline results on DukeMTMC-reID are not as high (**Rank@1=64.23%, mAP=43.92%**); the hyperparameters need to be tuned.

To save the trained model, we make a directory:
```bash
mkdir model
```

## Train
Train a model by
```bash
python train_new.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32 --data_dir your_data_path
```
`--gpu_ids` which GPU to run on.
`--name` the name of the model.
`--data_dir` the path of the training data.
`--train_all` use all images for training.
`--batchsize` the batch size.
`--erasing_p` the random erasing probability.
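The loss this repository is named after can be sketched as below. This is a standard batch-hard formulation, not necessarily the exact code in `train_new.py`: for each anchor we take the hardest (farthest) positive and hardest (closest) negative in the batch, and the soft-margin variant replaces the hinge `max(0, d_ap - d_an + margin)` with `softplus(d_ap - d_an)`:

```python
import torch
import torch.nn.functional as F

def triplet_loss_hard(features: torch.Tensor, labels: torch.Tensor,
                      margin: float = 0.3, soft: bool = False) -> torch.Tensor:
    """Batch-hard triplet loss with an optional soft margin.

    features: (N, D) embeddings (assumed L2-normalized); labels: (N,).
    """
    dist = torch.cdist(features, features)            # (N, N) pairwise L2
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos = same & ~eye                                 # same ID, not self
    neg = ~same                                       # different ID
    # Hardest positive: largest distance among same-ID pairs.
    d_ap = dist.masked_fill(~pos, float('-inf')).max(dim=1).values
    # Hardest negative: smallest distance among different-ID pairs.
    d_an = dist.masked_fill(~neg, float('inf')).min(dim=1).values
    if soft:
        return F.softplus(d_ap - d_an).mean()         # soft margin
    return F.relu(d_ap - d_an + margin).mean()        # hinge with hard margin
```

Note how the margin enters only the hinge branch, which is why (per the Tips above) an overly large margin can push training into a worse local minimum.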
Train a model with random erasing by
```bash
python train_new.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32 --data_dir your_data_path --erasing_p 0.5
```

## Test
Use trained model to extract feature by
```bash
python test.py --gpu_ids 0 --name ft_ResNet50 --test_dir your_data_path --which_epoch 59
```
`--gpu_ids` which GPU to run on.
`--name` the directory name of the trained model.
`--which_epoch` select the i-th model.
`--test_dir` the path of the testing data.
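The extraction step in `test.py` can be sketched roughly as follows, assuming the model maps an image batch to a (B, D) feature matrix; note the `torch.no_grad()` wrapper and L2 normalization mentioned in the Tips above:

```python
import torch

def extract_features(model: torch.nn.Module,
                     loader: torch.utils.data.DataLoader) -> torch.Tensor:
    """Extract L2-normalized features for all images in the loader.

    Runs the model in eval mode under torch.no_grad() and concatenates
    per-batch outputs into one (N, D) matrix.
    """
    model.eval()
    chunks = []
    with torch.no_grad():                 # no autograd graph: less memory
        for images, _ in loader:
            feats = model(images)                            # (B, D)
            feats = torch.nn.functional.normalize(feats, dim=1)
            chunks.append(feats)
    return torch.cat(chunks)
```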
## Evaluation
```bash
python evaluate.py
```
It will output Rank@1, Rank@5, Rank@10 and mAP results.
You may also try `evaluate_gpu.py` for a faster evaluation on the GPU. For the mAP calculation, you can also refer to the [C++ code for Oxford Buildings](http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/compute_ap.cpp). We use the triangle mAP calculation (consistent with the original Market1501 code).
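The Rank@k and AP metrics for a single query can be sketched as below. This is a minimal illustration using the standard point-estimate AP rather than the triangle (trapezoidal) variant, and it omits the junk/same-camera filtering that the real `evaluate.py` performs:

```python
import numpy as np

def rank_and_ap(query_feat, gallery_feats, query_label, gallery_labels):
    """Return (cmc, ap) for a single query against the gallery.

    cmc[k] is 1.0 if a correct match appears within the top-(k+1) results.
    ap is the average precision over the correct matches.
    """
    # Dot product acts as cosine similarity for L2-normalized features.
    scores = gallery_feats @ query_feat
    order = np.argsort(-scores)                       # best match first
    matches = (gallery_labels[order] == query_label).astype(np.float64)
    # CMC curve: 1 from the first correct match onward.
    cmc = (np.cumsum(matches) > 0).astype(np.float64)
    # Precision at each correct-match position, averaged.
    hits = np.where(matches == 1)[0]
    if len(hits) == 0:
        return cmc, 0.0
    precisions = (np.arange(len(hits)) + 1) / (hits + 1)
    return cmc, precisions.mean()
```

Averaging `cmc` and `ap` over all queries gives the Rank@k and mAP numbers that `evaluate.py` reports.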