{"id":13732640,"url":"https://github.com/CDOTAD/AlphaGAN-Matting","last_synced_at":"2025-05-08T08:32:11.108Z","repository":{"id":167434848,"uuid":"158566570","full_name":"CDOTAD/AlphaGAN-Matting","owner":"CDOTAD","description":"This project is an unofficial implementation of AlphaGAN: Generative adversarial networks for natural image matting published at the BMVC 2018","archived":false,"fork":false,"pushed_at":"2020-07-19T14:41:14.000Z","size":34997,"stargazers_count":152,"open_issues_count":12,"forks_count":36,"subscribers_count":7,"default_branch":"master","last_synced_at":"2024-11-15T01:32:54.270Z","etag":null,"topics":["gan","matting","pytorch"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CDOTAD.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2018-11-21T15:11:13.000Z","updated_at":"2024-01-29T05:53:03.000Z","dependencies_parsed_at":null,"dependency_job_id":"1e56d7c5-2e03-41a6-b697-97b9da0c8023","html_url":"https://github.com/CDOTAD/AlphaGAN-Matting","commit_stats":null,"previous_names":["cdotad/alphagan-matting"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CDOTAD%2FAlphaGAN-Matting","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CDOTAD%2FAlphaGAN-Matting/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CDOTAD%2FAlphaGAN-Matting/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CDOTAD%2FAlphaGAN-Matting/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/owners/CDOTAD","download_url":"https://codeload.github.com/CDOTAD/AlphaGAN-Matting/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253029154,"owners_count":21843032,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["gan","matting","pytorch"],"created_at":"2024-08-03T03:00:30.413Z","updated_at":"2025-05-08T08:32:10.404Z","avatar_url":"https://github.com/CDOTAD.png","language":"Python","readme":"# AlphaGAN\n\n![](https://img.shields.io/badge/python-3.6.5-brightgreen.svg) ![](https://img.shields.io/badge/pytorch-0.4.1-brightgreen.svg) ![](https://img.shields.io/badge/visdom-0.1.8.5-brightgreen.svg) ![](https://img.shields.io/badge/tqdm-4.28.1-brightgreen.svg) ![](https://img.shields.io/badge/opencv-3.3.1-brightgreen.svg)\n\nThis project is an unofficial implementation of [AlphaGAN: Generative adversarial networks for natural image matting](https://arxiv.org/pdf/1807.10088.pdf), published at BMVC 2018. 
As of now, the results of my experiments are not as good as those reported in the paper.\n\n# Dataset\n\n## Adobe Deep Image Matting Dataset\n\nFollow the [instructions](https://sites.google.com/view/deepimagematting) to contact the authors for the dataset.\n\nYou might need to follow the method described in **Deep Image Matting** to generate the trimaps from the alpha mattes.\n\nThe trimaps are generated while the data are loaded.\n\n```python\nimport random\n\nimport numpy as np\nimport cv2 as cv\n\ndef generate_trimap(alpha):\n   # Randomize the kernel size and iteration count so the width of the\n   # unknown region varies between samples.\n   k_size = random.choice(range(2, 5))\n   iterations = np.random.randint(5, 15)\n   kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (k_size, k_size))\n   dilated = cv.dilate(alpha, kernel, iterations=iterations)\n   eroded = cv.erode(alpha, kernel, iterations=iterations)\n   # Start from an all-unknown (128) trimap, then mark definite\n   # foreground (255) and definite background (0).\n   trimap = np.zeros(alpha.shape, dtype=np.uint8)\n   trimap.fill(128)\n\n   trimap[eroded \u003e= 255] = 255\n   trimap[dilated \u003c= 0] = 0\n\n   return trimap\n```\nSee `scripts/MattingTrain.ipynb` and `scripts/MattingTest.ipynb` to compose the training/testing sets.\n\nThe dataset structure used in this project:\n\n```bash\nTrain\n  ├── alpha  # the alpha ground truth\n  ├── fg     # the foreground images\n  ├── merged_cv  # the images composited from the fg \u0026 bg\nMSCOCO\n  ├── train2014 # the background images\n\n```\n# Running the Code\n\n```shell\n   python train.py --dataroot ${YOUR_DIM_DATASET_ROOT} \\\n                     --training_file ${THE TRAINING FILE OF THE DIM DATASET}\n```\n\n# Differences from the original paper\n\n- SyncBatchNorm instead of PyTorch's original BatchNorm when using multiple GPUs.\n\n- Training batch_size = 1 [[1]](#ref1) [[2]](#ref2)\n\n- Using GroupNorm [[2]](#ref2)\n\n- Using warmup [[3]](#ref3) [[4]](#ref4)\n\n# Records\n\n4 GPUs, batch size 32, with SyncBatchNorm\n- Achieved **SAD=78.22** after 21 epochs.\n\n1 GPU, batch size 1, with GroupNorm\n- Achieved [**SAD=68.61 MSE=0.03189**](https://drive.google.com/open?id=1yFRSjTNlAycmio8B-aibR7ZfYB9oZ-H3) after 33 epochs.\n- 
Achieved [**SAD=61.9 MSE=0.02878**](https://drive.google.com/open?id=1mICVWsQYGz3FrwiVZCnhsp56OAh-9coS) after xx epochs.\n\n# Results\n\n| image | trimap | alpha (predicted) |\n|:---:  | :--:   |      :---:       |\n|![](examples/images/beach-747750_1280_2.png)| ![](examples/trimaps/beach-747750_1280_2.png)| ![](result/beach-747750_1280_2.png)|\n|![](examples/images/boy-1518482_1920_9.png)| ![](examples/trimaps/boy-1518482_1920_9.png)| ![](result/boy-1518482_1920_9.png)|\n|![](examples/images/light-bulb-1104515_1280_3.png)|![](examples/trimaps/light-bulb-1104515_1280_3.png)|![](result/light-bulb-1104515_1280_3.png)|\n|![](examples/images/spring-289527_1920_15.png)|![](examples/trimaps/spring-289527_1920_15.png)|![](result/spring-289527_1920_15.png)|\n|![](examples/images/wedding-dresses-1486260_1280_3.png)|![](examples/trimaps/wedding-dresses-1486260_1280_3.png)|![](result/wedding-dresses-1486260_1280_3.png)|\n\n# Acknowledgments\n\nMy code is inspired by:\n\n- \u003cspan id=\"ref1\"\u003e\u003c/span\u003e [1] [pytorch-deep-image-matting](https://github.com/huochaitiantang/pytorch-deep-image-matting)\n\n- \u003cspan id=\"ref2\"\u003e\u003c/span\u003e [2] [FBA-Matting](https://github.com/MarcoForte/FBA-Matting)\n\n- \u003cspan id=\"ref3\"\u003e\u003c/span\u003e [3] [GCA-Matting](https://github.com/Yaoyi-Li/GCA-Matting)\n\n- \u003cspan id=\"ref4\"\u003e\u003c/span\u003e [4] [reid-strong-baseline](https://github.com/michuanhaohao/reid-strong-baseline)\n\n- [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)\n\n- [pytorch-book](https://github.com/chenyuntc/pytorch-book), chapter 7: generating anime portraits with a GAN\n\n- [pytorch-deeplab-xception](https://github.com/jfzhang95/pytorch-deeplab-xception)\n\n- [indexnet_matting](https://github.com/poppinace/indexnet_matting)\n","funding_links":[],"categories":["Background 
subtraction"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCDOTAD%2FAlphaGAN-Matting","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FCDOTAD%2FAlphaGAN-Matting","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCDOTAD%2FAlphaGAN-Matting/lists"}