{"id":7743428,"url":"https://github.com/hzwer/ICCV2019-LearningToPaint","last_synced_at":"2025-07-16T05:31:30.004Z","repository":{"id":42006363,"uuid":"174922473","full_name":"hzwer/ICCV2019-LearningToPaint","owner":"hzwer","description":"ICCV2019 - Learning to Paint With Model-based Deep Reinforcement Learning","archived":false,"fork":false,"pushed_at":"2025-05-07T03:16:13.000Z","size":19120,"stargazers_count":2281,"open_issues_count":0,"forks_count":313,"subscribers_count":46,"default_branch":"master","last_synced_at":"2025-07-09T12:04:33.640Z","etag":null,"topics":["computer-vision","deep-learning","painting","pytorch","reinforcement-learning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hzwer.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2019-03-11T03:56:03.000Z","updated_at":"2025-06-27T01:14:28.000Z","dependencies_parsed_at":"2025-05-15T12:06:45.537Z","dependency_job_id":"5289d8ca-798d-4fc9-b2a6-0ea3674db48d","html_url":"https://github.com/hzwer/ICCV2019-LearningToPaint","commit_stats":null,"previous_names":["hzwer/iccv2019-learningtopaint","megvii-research/iccv2019-learningtopaint"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/hzwer/ICCV2019-LearningToPaint","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hzwer%2FICCV2019-LearningToPaint","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hzwer%2FICCV2019-LearningToPaint/tags","releases_url":"https://repos.ecosyste.ms
/api/v1/hosts/GitHub/repositories/hzwer%2FICCV2019-LearningToPaint/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hzwer%2FICCV2019-LearningToPaint/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hzwer","download_url":"https://codeload.github.com/hzwer/ICCV2019-LearningToPaint/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hzwer%2FICCV2019-LearningToPaint/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":265464029,"owners_count":23770315,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","deep-learning","painting","pytorch","reinforcement-learning"],"created_at":"2024-04-11T02:10:29.773Z","updated_at":"2025-07-16T05:31:29.995Z","avatar_url":"https://github.com/hzwer.png","language":"Python","readme":"# ICCV2019-Learning to Paint\n\n## [arXiv](https://arxiv.org/abs/1903.04411) | [YouTube](https://youtu.be/YmOgKZ5oipk) | [Reddit](https://www.reddit.com/r/reinforcementlearning/comments/b5lpfl/learning_to_paint_with_modelbased_deep/) | [Slide(中文)](https://docs.google.com/presentation/d/1itHk_yI8847wx-meH9k0v_8dNZS2dD0p/edit?usp=sharing\u0026ouid=101528516762521089540\u0026rtpof=true\u0026sd=true) | [DeepWiki](https://deepwiki.com/hzwer/ICCV2019-LearningToPaint) | [Replicate](https://replicate.ai/hzwer/iccv2019-learningtopaint)\n[Zhewei Huang](https://scholar.google.com/citations?user=zJEkaG8AAAAJ\u0026hl=zh-CN\u0026oi=sra), Wen Heng, [Shuchang 
Zhou](https://scholar.google.com/citations?user=zYI0rysAAAAJ\u0026hl=zh-CN\u0026oi=sra)\n\n## Abstract\n\nWe show how to teach machines to paint like human painters, who can use a\nsmall number of strokes to create fantastic paintings. By employing a neural\nrenderer in model-based Deep Reinforcement Learning (DRL), our agents learn to\ndetermine the position and color of each stroke and make long-term plans to\ndecompose texture-rich images into strokes. Experiments demonstrate that\nexcellent visual effects can be achieved using hundreds of strokes. The\ntraining process does not require the experience of human painters or stroke\ntracking data.\n\n**You can easily try it out in [Colaboratory](https://colab.research.google.com/github/hzwer/LearningToPaint/blob/master/LearningToPaint.ipynb).**\n\n![Demo](./demo/lisa.gif)![Demo](./demo/sunrise.gif)![Demo](./demo/sunflower.gif)\n![Demo](./demo/palacemuseum.gif)![Demo](./demo/deepdream_night.gif)![Demo](./demo/deepdream_bird.gif)\n\n### Dependencies\n* [PyTorch](http://pytorch.org/) 1.1.0\n* [tensorboardX](https://github.com/lanpa/tensorboard-pytorch/tree/master/tensorboardX)\n* [opencv-python](https://pypi.org/project/opencv-python/) 3.4.0\n```\npip3 install torch==1.1.0\npip3 install tensorboardX\npip3 install opencv-python\n```\n\n## Testing\nMake sure renderer.pkl and actor.pkl are present before testing.\n\nYou can download a trained neural renderer and a CelebA actor for testing: [renderer.pkl](https://drive.google.com/open?id=1-7dVdjCIZIxh8hHJnGTK-RA1-jL1tor4) and [actor.pkl](https://drive.google.com/open?id=1a3vpKgjCVXHON4P7wodqhCgCMPgg1KeR)\n\n```\n$ wget \"https://drive.google.com/uc?export=download\u0026id=1-7dVdjCIZIxh8hHJnGTK-RA1-jL1tor4\" -O renderer.pkl\n$ wget \"https://drive.google.com/uc?export=download\u0026id=1a3vpKgjCVXHON4P7wodqhCgCMPgg1KeR\" -O actor.pkl\n$ python3 baseline/test.py --max_step=100 --actor=actor.pkl --renderer=renderer.pkl --img=image/test.png --divide=4\n$ ffmpeg -r 10 -f 
image2 -i output/generated%d.png -s 512x512 -c:v libx264 -pix_fmt yuv420p video.mp4 -q:v 0 -q:a 0\n(makes a video of the painting process)\n```\n\nWe also provide some other neural renderers and agents; you can use them instead of renderer.pkl to train the agent:\n\n[triangle.pkl](https://drive.google.com/open?id=1YefdnTuKlvowCCo1zxHTwVJ2GlBme_eE) --- [actor_triangle.pkl](https://drive.google.com/open?id=1k8cgh3tF7hKFk-IOZrgsUwlTVE3CbcPF);\n\n[round.pkl](https://drive.google.com/open?id=1kI4yXQ7IrNTfjFs2VL7IBBL_JJwkW6rl) --- [actor_round.pkl](https://drive.google.com/open?id=1ewDErUhPeGsEcH8E5a2QAcUBECeaUTZe);\n\n[bezierwotrans.pkl](https://drive.google.com/open?id=1XUdti00mPRh1-1iU66Uqg4qyMKk4OL19) --- [actor_notrans.pkl](https://drive.google.com/open?id=1VBtesw2rHmYu2AeJ22XvTCuzuqkY8hZh)\n\nWe also provide a Baidu Netdisk (百度网盘) mirror. Link: https://pan.baidu.com/s/1GELBQCeYojPOBZIwGOKNmA Extraction code: aq8n\n\n## Training\n\n### Datasets\nDownload the [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset and put the aligned images in data/img_align_celeba/\\*\\*\\*\\*\\*\\*.jpg\n\n### Neural Renderer\nTo create a differentiable painting environment, we first need to train the neural renderer. 
\n\n```\n$ python3 baseline/train_renderer.py\n$ tensorboard --logdir train_log --port=6006\n(Training progress will be shown at http://127.0.0.1:6006)\n```\n\n### Paint Agent\nOnce the neural renderer looks good enough, we can begin training the agent.\n```\n$ cd baseline\n$ python3 train.py --max_step=40 --debug --batch_size=96\n(A step contains 5 strokes by default.)\n$ tensorboard --logdir train_log --port=6006\n```\n\n## Resources\n[Coverage by QbitAI (量子位)](https://zhuanlan.zhihu.com/p/64097633)\n\n[Learning to Paint: a painting AI](https://zhuanlan.zhihu.com/p/61761901)\n\n[Megvii Research introduces a painting agent based on deep reinforcement learning](https://zhuanlan.zhihu.com/p/80732065)\n\n* Our ICCV poster\n  \u003cdiv\u003e\n  \u003cimg src=\"./image/poster.png\" width=\"800\"\u003e\n  \u003c/div\u003e\n* [Our ICCV rebuttal for reviewers](https://drive.google.com/file/d/1bEBS-uxmVEc7WVuX35NCodxDu17s_d8m/view?usp=sharing)\n\n## Contributors\n- [hzwer](https://github.com/hzwer)\n- [ak9250](https://github.com/ak9250)\n\nAlso many thanks to [ctmakro](https://github.com/ctmakro/rl-painter) for inspiring this work. He also explored using a greedy algorithm to generate paintings - [opencv_playground](https://github.com/ctmakro/opencv_playground).\n\nIf you find this repository useful for your research, please cite the following paper:\n```\n@inproceedings{huang2019learning,\n  title={Learning to paint with model-based deep reinforcement learning},\n  author={Huang, Zhewei and Heng, Wen and Zhou, Shuchang},\n  booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},\n  year={2019}\n}\n```\n","funding_links":[],"categories":["Others"],"sub_categories":["Interpretation"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhzwer%2FICCV2019-LearningToPaint","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhzwer%2FICCV2019-LearningToPaint","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhzwer%2FICCV2019-LearningToPaint/lists"}