https://github.com/nashory/dense-equi-torch
Torch7 implementation of Unsupervised object learning from dense equivariant image labelling
- Host: GitHub
- URL: https://github.com/nashory/dense-equi-torch
- Owner: nashory
- Created: 2017-09-06T05:24:24.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2017-11-16T09:44:24.000Z (almost 8 years ago)
- Last Synced: 2025-04-02T20:38:32.267Z (6 months ago)
- Topics: celeba, landmark, siamese-network, unsupervised
- Language: Lua
- Homepage:
- Size: 46.9 KB
- Stars: 11
- Watchers: 4
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
# dense-equi-torch
Torch7 implementation of ["Unsupervised object learning from dense equivariant image labelling"](https://arxiv.org/abs/1706.02932)

___Note: I am working on the training/test regressor code to make a cleaner version, but pretraining the network for latent-space mapping already works perfectly, so don't worry :)___

# Prerequisites
+ Torch7
+ [thinplatspline](https://github.com/olt/thinplatespline)
+ python 2.7
+ other torch packages (xlua, display, hdf5, image ...)

~~~
luarocks install display
luarocks install hdf5
luarocks install image
luarocks install xlua
~~~

# Usage
First, download the CelebA dataset [(here)](https://drive.google.com/drive/folders/0B7EVK8r0v71pWEZsZE9oNnFzTm8) and arrange the images as:

~~~
|-- image 1
|-- image 2
|-- image 3 ...
~~~

To train the feature extractor (CNN):

~~~
1. change options in "script/opts.lua" and "data/gen_tps.py"
2. do "th pretrain.lua"
>> pretrained model will be saved in 'repo/pretrain/'
~~~

To train the regressor (mlp):
~~~
1. change options in "script/opts.lua" and "data/gen_reg.lua"
2. do "th regtrain.lua"
>> trained regressor will be saved in 'repo/regressor/'
~~~

To test the regressor (mlp):
~~~
1. change options in "script/opts.lua"
2. do "th regtest.lua"
>> test images with landmarks will be saved in 'repo/test'
~~~

# Results
### (1) mapping on the latent space
+ Red : left-mouth
+ Purple : right-mouth
+ Green : nose
+ Blue : left-eye
+ Orange : right-eye

<https://plot.ly/~stellastra666/156/>
<https://plot.ly/~stellastra666/162/>
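For context, the latent-space mapping above is learned without labels: descriptors of a point and of its correspondence under a known random TPS warp are pushed to match, with all other locations acting as negatives. A minimal NumPy sketch of such a dense-matching loss (an illustration of the idea only, not this repo's code; `feat_a`, `feat_b`, and `corr` are hypothetical names):

```python
import numpy as np

def dense_matching_loss(feat_a, feat_b, corr):
    """feat_a, feat_b: (N, D) L2-normalised descriptors at N pixel locations
    of an image and of its warped copy; corr[i] is the location in feat_b
    that location i of feat_a maps to under the known warp."""
    logits = feat_a @ feat_b.T                           # (N, N) pairwise similarity
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy: each location should select its warped correspondence
    return -log_p[np.arange(len(corr)), corr].mean()
```

Minimising a loss of this shape over many random warps is what makes semantic points such as "left-mouth" cluster in the latent space without any landmark annotation.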
### (2) landmark detection on CelebA
1. good case (red: predict / green: GT)







2. bad case (red: predict / green: GT)







### (3) Performance Benchmark
__1. Original paper__
| nLandmark | regressor training | IOD error|
| ---- | --- |---|
|10|CelebA|6.32|
|30|CelebA|5.76|
|50|CelebA|5.33|

__2. My code__
| nLandmark | regressor training |Iter(reg) | MSE | IOD error|
| ---- | --- |---|---|---|
|100|CelebA|5K| 3.15|5.71|
|100|CelebA|50K| 3.31|5.67|

### (4) Effect of training data when fine-tuning regressor (mlp)
|Training images| learning iter | training loss | MSE | IOD error|
|---|---|---|---|---|
| 10 | 1K | 0.04 |5.67 | 9.97 |
| 50 | 1K | 0.09 |4.73 | 8.07 |
| 100 | 1K | 0.13 |4.42 | 8.13 |
| 2000 | 2K | 0.18 |3.38 | 6.28 |
| 5000 | 3K | 0.20 |3.36 | 5.84 |
| 15000 | 5K | 0.21 |3.15 | 5.71 |
| 15000 | 50K | 0.21 |3.31 | 5.67 |

# Acknowledgement
Thanks to James for kindly answering my inquiries and providing pieces of MATLAB code :)

# Author
MinchulShin / [@nashory](https://github.com/nashory)