{"id":13936652,"url":"https://github.com/carpedm20/simulated-unsupervised-tensorflow","last_synced_at":"2025-10-10T05:11:48.618Z","repository":{"id":41207482,"uuid":"77440857","full_name":"carpedm20/simulated-unsupervised-tensorflow","owner":"carpedm20","description":"TensorFlow implementation of \"Learning from Simulated and Unsupervised Images through Adversarial Training\"","archived":false,"fork":false,"pushed_at":"2019-12-10T02:36:54.000Z","size":1929,"stargazers_count":577,"open_issues_count":19,"forks_count":145,"subscribers_count":30,"default_branch":"master","last_synced_at":"2025-03-29T01:09:54.240Z","etag":null,"topics":["apple","deep-learning","generative-model","synthetic-images","tensorflow"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/carpedm20.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2016-12-27T08:51:15.000Z","updated_at":"2025-02-17T05:01:24.000Z","dependencies_parsed_at":"2022-08-26T04:51:04.569Z","dependency_job_id":null,"html_url":"https://github.com/carpedm20/simulated-unsupervised-tensorflow","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/carpedm20%2Fsimulated-unsupervised-tensorflow","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/carpedm20%2Fsimulated-unsupervised-tensorflow/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/carpedm20%2Fsimulated-unsupervised-tensorflow/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repo
sitories/carpedm20%2Fsimulated-unsupervised-tensorflow/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/carpedm20","download_url":"https://codeload.github.com/carpedm20/simulated-unsupervised-tensorflow/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247276164,"owners_count":20912288,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["apple","deep-learning","generative-model","synthetic-images","tensorflow"],"created_at":"2024-08-07T23:02:53.282Z","updated_at":"2025-10-10T05:11:43.564Z","avatar_url":"https://github.com/carpedm20.png","language":"Python","readme":"# Simulated+Unsupervised (S+U) Learning in TensorFlow\n\nTensorFlow implementation of [Learning from Simulated and Unsupervised Images through Adversarial Training](https://arxiv.org/abs/1612.07828).\n\n![model](./assets/SimGAN.png)\n\n\n## Requirements\n\n- Python 2.7\n- [TensorFlow](https://www.tensorflow.org/) 0.12.1\n- [SciPy](http://www.scipy.org/install.html)\n- [pillow](https://github.com/python-pillow/Pillow)\n- [tqdm](https://github.com/tqdm/tqdm)\n\n## Usage\n\nTo generate the synthetic dataset:\n\n1. Run [UnityEyes](http://www.cl.cam.ac.uk/research/rainbow/projects/unityeyes/), setting `resolution` to `640x480` and `Camera parameters` to `[0, 0, 20, 40]`.\n
2. Move the generated images and json files into `data/gaze/UnityEyes`.\n\nThe `data` directory should look like:\n\n    data\n    ├── gaze\n    │   ├── MPIIGaze\n    │   │   └── Data\n    │   │       └── Normalized\n    │   │           ├── p00\n    │   │           ├── p01\n    │   │           └── ...\n    │   └── UnityEyes # contains images of UnityEyes\n    │       ├── 1.jpg\n    │       ├── 1.json\n    │       ├── 2.jpg\n    │       ├── 2.json\n    │       └── ...\n    ├── __init__.py\n    ├── gaze_data.py\n    ├── hand_data.py\n    └── utils.py\n\nTo train a model (samples will be generated in the `samples` directory):\n\n    $ python main.py\n    $ tensorboard --logdir=logs --host=0.0.0.0\n\nTo refine all synthetic images with a pretrained model:\n\n    $ python main.py --is_train=False --synthetic_image_dir=\"./data/gaze/UnityEyes/\"\n\n\n## Training results\n\n\n### Differences from the paper\n\n- Used the Adam and Stochastic Gradient Descent optimizers.\n- Used only 83K synthetic images from `UnityEyes` (14% of the 1.2M used in the paper).\n- Manually chose hyperparameters for `B` and `lambda` because they are not specified in the paper.\n\n\n### Experiments #1\n\nFor these synthetic images,\n\n![UnityEyes_sample](./assets/UnityEyes_samples1.png)\n\nResult of `lambda=1.0` with `optimizer=sgd` after 8,000 steps.\n\n    $ python main.py --reg_scale=1.0 --optimizer=sgd\n\n![Refined_sample_with_lambd=1.0](./assets/lambda=1.0_optimizer=sgd.png)\n\nResult of `lambda=0.5` with `optimizer=sgd` after 8,000 steps.\n\n    $ python main.py --reg_scale=0.5 --optimizer=sgd\n\n![Refined_sample_with_lambd=0.5](./assets/lambda=0.5_optimizer=sgd.png)\n\nTraining losses of the discriminator and refiner when `lambda` is `1.0` (green) and `0.5` (yellow).\n\n![loss](./assets/loss_lambda=1.0,0.5_optimizer=sgd.png)\n\n\n### Experiments #2\n\nFor these synthetic images,\n\n![UnityEyes_sample](./assets/UnityEyes_samples2.png)\n\nResult of `lambda=1.0` with `optimizer=adam` after 4,000 steps.\n\n    $
python main.py --reg_scale=1.0 --optimizer=adam\n\n![Refined_sample_with_lambd=1.0](./assets/lambda=1.0_optimizer=adam.png)\n\nResult of `lambda=0.5` with `optimizer=adam` after 4,000 steps.\n\n    $ python main.py --reg_scale=0.5 --optimizer=adam\n\n![Refined_sample_with_lambd=0.5](./assets/lambda=0.5_optimizer=adam.png)\n\nResult of `lambda=0.1` with `optimizer=adam` after 4,000 steps.\n\n    $ python main.py --reg_scale=0.1 --optimizer=adam\n\n![Refined_sample_with_lambd=0.1](./assets/lambda=0.1_optimizer=adam.png)\n\nTraining losses of the discriminator and refiner when `lambda` is `1.0` (blue), `0.5` (purple), and `0.1` (green).\n\n![loss](./assets/loss_lambda=1.0,0.5,0.1_optimizer=adam.png)\n\n\n## Author\n\nTaehoon Kim / [@carpedm20](http://carpedm20.github.io)\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcarpedm20%2Fsimulated-unsupervised-tensorflow","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcarpedm20%2Fsimulated-unsupervised-tensorflow","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcarpedm20%2Fsimulated-unsupervised-tensorflow/lists"}