# InfoGAN

## InfoGAN Architecture

TensorLayer implementation of [InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets](https://arxiv.org/abs/1606.03657).

<div align="center">
	<img src='img/architecture.svg' width="60%" height="50%">
</div>

## Results

### MNIST

#### Manipulating the First Continuous Latent Code

Changing <img src="https://latex.codecogs.com/svg.latex?c_1" title="c_1" /> rotates the digits:

<div align="center">
	<img src='./MNIST-wangchang/results/c1_res.png' width="60%">
</div>

#### Manipulating the Second Continuous Latent Code

Changing <img src="https://latex.codecogs.com/svg.latex?c_2" title="c_2" /> changes the width of the digits:

<div align="center">
	<img src='./MNIST-wangchang/results/c2_res.png' width="60%">
</div>

#### Manipulating the Discrete Latent Code (Categorical)

Changing <img src="https://latex.codecogs.com/svg.latex?d" title="d" /> changes the digit class:

<div align="center">
	<img src='./MNIST-wangchang/results/cat_res.png' width="60%">
</div>

#### Random Generation and Loss Plot

<div align="center">
	<img src='./MNIST-wangchang/results/random.png' width="60%">
</div>

After a sufficient number of iterations, G_loss rises steadily, showing that the discriminator keeps getting stronger; we use this as the signal to stop training.

<div align="center">
	<img src='./MNIST-wangchang/results/loss.png' width="100%">
</div>

### CelebA

#### Manipulating Discrete Latent Code

Azimuth (pose):

<div align="center">
	<img src='./CelebA-lishuchen/samples/Azimuth.png' width="80%" height="50%">
</div>

Presence or absence of glasses:

<div align="center">
	<img src='./CelebA-lishuchen/samples/Glasses.png' width="80%" height="50%">
</div>

Hair color:

<div align="center">
	<img src='./CelebA-lishuchen/samples/Hair_color.png' width="80%" height="50%">
</div>

Hair quantity:

<div align="center">
	<img src='./CelebA-lishuchen/samples/Hair_quantity.png' width="80%" height="50%">
</div>

Lighting:

<div align="center">
	<img src='./CelebA-lishuchen/samples/Lighting.png' width="80%" height="50%">
</div>

### Faces

#### Loss Plot

<div align="center">
	<img src='./Faces-zhushenhan/results/loss.png' width="100%">
</div>

#### Azimuth

<div align="center">
	<img src='./Faces-zhushenhan/results/Azimuth.png' width="75%">
</div>

#### Random Generation

<div align="center">
	<img src='./Faces-zhushenhan/results/random.png' width="50%">
</div>

### Chairs

#### Rotation

<div align="center">
	<img src='./Chairs-yuepengyun/results/rotation.png' width="80%" height="50%">
</div>

<div align="center">
	<img src='./Chairs-yuepengyun/results/rotation2.png' width="80%" height="50%">
</div>

## Run

#### MNIST

* Start training with `python train.py`; the MNIST dataset is downloaded automatically.
* To see the results, run `python test.py` and **enter the number of your saved model**.

#### CelebA

+ Set your image folder in `config.py`.
+ Dataset link:
	+ [CelebA](https://drive.google.com/drive/folders/0B7EVK8r0v71pWEZsZE9oNnFzTm8)
+ Start training:

```
python train.py
```

#### Faces

* Set your data folder in `config.py`.
* Link for BFM 2009:
	* [Basel Face Model](https://faces.dmi.unibas.ch/bfm/main.php?nav=1-0&id=basel_face_model). Download this before generating data.
	* Data is generated with the code in `data_generator`; call `gen_data` in MATLAB.
* Start training with `python train.py`.
* To see the results, run `python test.py` and **enter the number of your saved model**.

#### Chairs

+ Set your image folder in `data.py`.
+ Dataset link:
	+ [Chairs](https://www.di.ens.fr/willow/research/seeing3Dchairs/)
+ Start training:

```
python train.py
```

## References

1. [InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets](https://arxiv.org/abs/1606.03657)
2. [Large-scale CelebFaces Attributes (CelebA) Dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)
3. [THE MNIST DATABASE of handwritten digits](http://yann.lecun.com/exdb/mnist/)
4. [Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models](https://www.di.ens.fr/willow/research/seeing3Dchairs/)

## Authors

+ [Li Shuchen (@lisc55)](https://github.com/lisc55): The experiment on CelebA.
+ [Wang Chang (@wangchang327)](https://github.com/wangchang327): The experiment on MNIST.
+ [Zhu Shenhan (@zshCuanNi)](https://github.com/zshCuanNi): The experiment on Faces (finished by [Wang Chang (@wangchang327)](https://github.com/wangchang327)).
+ [Yue Pengyun (@hswd40)](https://github.com/hswd40): The experiment on Chairs.
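## Appendix: Sketch of the Latent Input

The latent-code sweeps in the result grids come from varying one code while holding everything else fixed. As a minimal NumPy sketch of how InfoGAN's generator input is assembled for MNIST (the dimensions — 62 noise units, a 10-way categorical code, 2 continuous codes — follow the paper's MNIST setup; the function names are illustrative, not this repository's API):

```python
import numpy as np

def sample_latent(batch_size, noise_dim=62, n_cat=10, n_cont=2, rng=None):
    """Assemble the generator input: incompressible noise z, a one-hot
    categorical code d, and continuous codes c (paper-style MNIST sizes)."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.uniform(-1.0, 1.0, size=(batch_size, noise_dim))    # noise z
    d = np.eye(n_cat)[rng.integers(0, n_cat, size=batch_size)]  # digit class
    c = rng.uniform(-1.0, 1.0, size=(batch_size, n_cont))       # e.g. rotation, width
    return np.concatenate([z, d, c], axis=1)

def sweep_code(base, cont_index, values, noise_dim=62, n_cat=10):
    """Repeat one latent vector while varying a single continuous code --
    the recipe behind the c1/c2 sweep rows in the figures."""
    out = np.repeat(base[None, :], len(values), axis=0)
    out[:, noise_dim + n_cat + cont_index] = values
    return out
```

For example, `sweep_code(sample_latent(1)[0], 0, np.linspace(-2, 2, 10))` yields ten latent vectors that differ only in the first continuous code; feeding them to the trained generator produces one row of a sweep grid.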
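The loss curves above include InfoGAN's mutual-information term: a variational lower bound on I(c; G(z, c)), maximized by training a recognition network Q to predict the code from the generated image. A hedged NumPy sketch of the categorical part of that bound (up to the constant entropy H(c); names and shapes are illustrative, not taken from this repository):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def categorical_mi_lower_bound(code_onehot, q_logits):
    """Variational lower bound on I(c; G(z, c)) for a categorical code:
    the batch mean of log Q(c | x), where Q is the recognition network."""
    log_q = np.log(softmax(q_logits) + 1e-12)
    return float((code_onehot * log_q).sum(axis=1).mean())
```

A recognition network that confidently recovers the code drives the bound toward 0, while a uniform Q gives log(1/10) ≈ -2.30 for a 10-way code; the generator and Q are trained jointly to maximize this term alongside the usual GAN losses.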