{"id":13612602,"url":"https://github.com/tensorlayer/adaptive-style-transfer","last_synced_at":"2025-04-24T11:31:26.214Z","repository":{"id":41330470,"uuid":"146084522","full_name":"tensorlayer/adaptive-style-transfer","owner":"tensorlayer","description":"Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization","archived":false,"fork":false,"pushed_at":"2021-12-03T08:11:19.000Z","size":66122,"stargazers_count":115,"open_issues_count":6,"forks_count":29,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-04-03T04:13:04.793Z","etag":null,"topics":["deep-learning","deepdream","style-transfer","tensorflow","tensorlayer"],"latest_commit_sha":null,"homepage":"https://github.com/tensorlayer/tensorlayer","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorlayer.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-08-25T09:57:38.000Z","updated_at":"2024-10-25T13:01:47.000Z","dependencies_parsed_at":"2022-09-10T19:40:42.209Z","dependency_job_id":null,"html_url":"https://github.com/tensorlayer/adaptive-style-transfer","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fadaptive-style-transfer","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fadaptive-style-transfer/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fadaptive-style-transfer/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fadaptive-style-transfer/manifes
ts","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorlayer","download_url":"https://codeload.github.com/tensorlayer/adaptive-style-transfer/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250618198,"owners_count":21460042,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","deepdream","style-transfer","tensorflow","tensorlayer"],"created_at":"2024-08-01T20:00:32.290Z","updated_at":"2025-04-24T11:31:23.098Z","avatar_url":"https://github.com/tensorlayer.png","language":"Python","readme":"## Adaptive Style Transfer in TensorFlow and TensorLayer\n\n\u003e Update:\n\u003e - (15/05/2020) Migrated to TensorLayer2 (backend=TensorFlow 2.x). Original TL1 code can be found [here](https://github.com/tensorlayer/adaptive-style-transfer/tree/tl1).\n\nThis repository is implemented with [**TensorLayer2.0+**](https://github.com/tensorlayer/tensorlayer).\n\nBefore [\"Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization\"](https://arxiv.org/abs/1703.06868),\nthere were two main approaches to style transfer. In the first, given one content image and one style image, we randomly initialize a noise image and iteratively update it to obtain the output image. The drawback of this approach is that it is slow: it usually takes about three minutes to produce one image.\nLater, researchers proposed training one model for each specific style, which takes one image as input and outputs the stylized image. 
This approach is far faster than the previous one and achieves real-time style transfer.\n\nHowever, one model per style is still not good enough for production. If a mobile app wants to support 100 styles offline, it is impractical to store 100 models on the phone. Adaptive style transfer, in contrast, supports arbitrary styles with one single model! There is no need to train a new model for each new style; simply input one content image and one style image of your choice.\n\n⚠️ ⚠️ **This repo will be moved [here](https://github.com/tensorlayer/tensorlayer/tree/master/examples) (please star) for life-cycle management soon. More cool Computer Vision applications such as pose estimation and style transfer can be found in this [organization](https://github.com/tensorlayer).**\n\n\n### Usage\n\n1. Install TensorFlow and the master branch of TensorLayer:\n    ```\n    pip install git+https://github.com/tensorlayer/tensorlayer.git\n    ```\n\n2. You can use the \u003cb\u003etrain.py\u003c/b\u003e script to train your own model. To train the model, download the [MSCOCO dataset](http://cocodataset.org/#download) and the [Wikiart dataset](https://www.kaggle.com/c/painter-by-numbers), and put the dataset images under the \u003cb\u003e'dataset/content_samples'\u003c/b\u003e and \u003cb\u003e'dataset/style_samples'\u003c/b\u003e folders.\n\n3. You can then use the \u003cb\u003etest.py\u003c/b\u003e script to run your trained model. Remember to put the trained model into the \u003cb\u003e'pretrained_models'\u003c/b\u003e folder and rename it to 'dec_best_weights.h5'. A pretrained model can be downloaded from [here](https://github.com/tensorlayer/adaptive-style-transfer/tree/tl1to2/pretrained_models); it is for TensorLayer v2 and uses a decoder built with _DeConv2d_ layers.\n\n4. You may compare this TL2 version with the preceding TL1 branch to learn how to migrate TL1 code. 
There are also plenty of comments in the code tagged with 'TL1to2:' for your reference.\n\n\n### Results\n\nHere are some result images (left to right: content, style, result):\n\n\u003cdiv align=\"center\"\u003e\n   \u003cimg src=\"./test_images/content/brad_pitt_01.jpg\" width=250 height=250\u003e\n   \u003cimg src=\"./test_images/style/cat.jpg\" width=250 height=250\u003e\n   \u003cimg src=\"./test_images/output/brad_pitt_01_cat.jpg\" width=250 height=250\u003e\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n   \u003cimg src=\"./test_images/content/000000532397.jpg\" width=250 height=250\u003e\n   \u003cimg src=\"./test_images/style/lion.jpg\" width=250 height=250\u003e\n   \u003cimg src=\"./test_images/output/000000532397_lion.jpg\" width=250 height=250\u003e\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n   \u003cimg src=\"./test_images/content/000000526781.jpg\" width=250 height=250\u003e\n   \u003cimg src=\"./test_images/style/216_01.jpg\" width=250 height=250\u003e\n   \u003cimg src=\"./test_images/output/000000526781_216_01.jpg\" width=250 height=250\u003e\n\u003c/div\u003e\n\nEnjoy!\n\n### Discussion\n\n- [TensorLayer Slack](https://join.slack.com/t/tensorlayer/shared_invite/enQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc)\n- [TensorLayer WeChat](https://github.com/tensorlayer/tensorlayer-chinese/blob/master/docs/wechat_group.md)\n\n### License\n\n- This project is for academic use only.\n","funding_links":[],"categories":["Python","2. General Computer Vision","Style Transfer"],"sub_categories":["1.2 DatasetAPI and TFRecord Examples"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorlayer%2Fadaptive-style-transfer","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorlayer%2Fadaptive-style-transfer","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorlayer%2Fadaptive-style-transfer/lists"}