{"id":18946869,"url":"https://github.com/sws-5007/portrait-stylization_python","last_synced_at":"2025-09-11T09:39:51.725Z","repository":{"id":203790417,"uuid":"710415387","full_name":"SWS-5007/Portrait-Stylization_Python","owner":"SWS-5007","description":null,"archived":false,"fork":false,"pushed_at":"2023-10-26T16:44:13.000Z","size":25055,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-05-25T13:48:31.851Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SWS-5007.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2023-10-26T16:36:38.000Z","updated_at":"2023-10-26T17:40:44.000Z","dependencies_parsed_at":null,"dependency_job_id":"54dabc62-5b28-4ce2-b023-aa9bb6630184","html_url":"https://github.com/SWS-5007/Portrait-Stylization_Python","commit_stats":null,"previous_names":["sws-5007/portrait-stylization_python"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/SWS-5007/Portrait-Stylization_Python","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SWS-5007%2FPortrait-Stylization_Python","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SWS-5007%2FPortrait-Stylization_Python/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SWS-5007%2FPortrait-Stylization_Python/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SWS-5007%2FPortrait-Stylization_Python/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts
/GitHub/owners/SWS-5007","download_url":"https://codeload.github.com/SWS-5007/Portrait-Stylization_Python/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SWS-5007%2FPortrait-Stylization_Python/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":274609590,"owners_count":25316653,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-11T02:00:13.660Z","response_time":74,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-08T13:08:14.003Z","updated_at":"2025-09-11T09:39:51.699Z","avatar_url":"https://github.com/SWS-5007.png","language":"Python","readme":"![](https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/portrait_stylization_banner.png?raw=true)\n\n[![arXiv](https://img.shields.io/badge/arXiv-1508.06576-b31b1b.svg?style=for-the-badge)](https://arxiv.org/abs/1508.06576)\n[![Open with Colab](https://img.shields.io/badge/Open_In_Colab-0?style=for-the-badge\u0026logo=GoogleColab\u0026color=525252)](https://colab.research.google.com/github/thiagoambiel/PortraitStylization/blob/colab/notebooks/PortraitStylization_Demo.ipynb)\n\nBased on the improvements of [Katherine Crowson (Neural style transfer in PyTorch)](https://github.com/crowsonkb/style-transfer-pytorch) \non the paper [A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576),\nThis 
repository introduces changes that improve style transfer results on images containing human faces.

# How it Works
![](https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/stylization_diagram.png?raw=true)
<p align="center">
  <b>Figure 1:</b> Stylization Diagram.
</p>

First, the original image is passed to the [MODNet Human Segmentation Model](#modnet-background-removal) to generate an alpha layer and remove the background. The image is then forwarded through the [FaceNet](#facial-identification-loss), [FaceMesh](#facial-meshes-loss), and **VGG19** models for feature extraction. The **VGG19** model is also used to extract features from the desired style images. Finally, all extracted features are combined to build the **Content**, **Facial**, and **Style** losses.

## Facial Identification Loss
Using the [FaceNet Inception ResNet model (Tim Esler)](https://github.com/timesler/facenet-pytorch) with VGGFace2 pretrained weights, we can implement a **FaceID Loss**: the model's internal representations of the content image and the result image are compared with **MSE (Mean Squared Error)**, just like the *VGG model content loss*.

The FaceID Loss weight can be controlled through the `face_weight` argument.

## Facial Meshes Loss
The **FaceMesh Loss** works like the **FaceID Loss**, but it uses only the last-layer output of the [FaceMesh model (George Grigorev)](https://github.com/thepowerfuldeez/facemesh.pytorch), which represents the **facial 3D meshes**. Use it only for fine adjustments to relevant expression attributes such as mouth opening.

The FaceMesh Loss weight can be controlled through the `mesh_weight` argument.

## MODNet Background Removal

The `BackgroundRemoval` class uses [MODNet: Trimap-Free Portrait Matting in Real Time](https://github.com/ZHKKKe/MODNet) as its backend, with the human matting pretrained weights provided by the author.
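Conceptually, the alpha matte predicted by MODNet is used to composite the portrait over a solid background color. A minimal per-pixel sketch of that compositing step, in plain Python with a hypothetical helper name (the actual `BackgroundRemoval` class wraps the MODNet network and operates on full images):

```python
def composite_over_color(pixels, matte, bg=(0, 0, 0)):
    """Blend RGB pixels over a solid background using an alpha matte.

    pixels: list of (r, g, b) tuples.
    matte: one alpha value in [0, 1] per pixel
           (1.0 = foreground/person, 0.0 = background).
    bg: solid background color, e.g. black for bg_color="black".
    """
    out = []
    for (r, g, b), a in zip(pixels, matte):
        # Standard alpha blend: a * foreground + (1 - a) * background.
        out.append(tuple(round(a * c + (1 - a) * k)
                         for c, k in zip((r, g, b), bg)))
    return out
```

A fully opaque pixel keeps its color, while a half-transparent one fades halfway toward the background color.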
You can remove the background from the input image for better results, but it isn't required.

The MODNet model can be used through the [`remove_bg.py`](#background-removal) script and the [`BackgroundRemoval`](#load-as-module) class.

## Experiments
The `content_weight` parameter focuses on general image content, while the `face_weight` parameter focuses on specific facial attributes. `mesh_weight` is helpful when adjusting finer details on the resulting faces.

**Note**: the `crop_faces` parameter is set to **False** in all the experiments below.

<table>
  <tbody>
    <tr>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/experiments/all_models_default_params.png?raw=true" width="150"/></th>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/experiments/w_o_vgg_model_default_params.png?raw=true" width="150"/></th>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/experiments/w_o_facenet_model_default_params.png?raw=true" width="150"/></th>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/experiments/w_o_facemesh_model_default_params.png?raw=true" width="150"/></th>
    </tr>
    <tr>
      <td>
        <ul>
          <li><nobr><b>content_weight</b>: 0.05</nobr></li>
          <li><nobr><b>face_weight</b>: 0.25</nobr></li>
          <li><nobr><b>mesh_weight</b>: 0.015</nobr></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><nobr><b>content_weight</b>: 0.0</nobr></li>
          <li><nobr><b>face_weight</b>: 0.25</nobr></li>
          <li><nobr><b>mesh_weight</b>: 0.015</nobr></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><nobr><b>content_weight</b>: 0.05</nobr></li>
          <li><nobr><b>face_weight</b>: 0.0</nobr></li>
          <li><nobr><b>mesh_weight</b>: 0.015</nobr></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><nobr><b>content_weight</b>: 0.05</nobr></li>
          <li><nobr><b>face_weight</b>: 0.25</nobr></li>
          <li><nobr><b>mesh_weight</b>: 0.0</nobr></li>
        </ul>
      </td>
    </tr>
  </tbody>
</table>

## Installation

Here you can find instructions to install the project through conda. We'll create a new environment and install the required dependencies.

First, clone the repository locally:
```bash
git clone https://github.com/thiagoambiel/PortraitStylization.git
cd PortraitStylization
```

Create a new conda environment called `portrait_stylization`:
```bash
conda create -n portrait_stylization python=3.7
conda activate portrait_stylization
```

Now install the project dependencies:
```bash
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
```

## Basic Usage
You can use the CLI tools `remove_bg.py` and `stylize.py`, or import the `BackgroundRemoval` and `StyleTransfer` classes. Both methods download the **VGG19 weights (548 MB)** and **FaceNet VGGFace2 weights (107 MB)** on the first run.

Input images are converted to sRGB when loaded, and output images are in the sRGB color space. Alpha channels in the inputs are ignored.

### Load as Module
The `StyleTransfer` and `BackgroundRemoval` classes can be used in an interactive Python session or in a regular script.
```python
from PIL import Image

from style_transfer import StyleTransfer
from remove_bg import BackgroundRemoval

# Load the content image.
original_image = Image.open("content.jpg")

# Load MODNet and remove the content image background.
background_removal = BackgroundRemoval("./weights/modnet.pth", device="cpu")

content_image = background_removal.remove_background(
    img=original_image,
    bg_color="black",
)

# Load the style images.
style_images = [
    Image.open("style_1.jpg"),
    Image.open("style_2.jpg"),
]

# Load and run the style transfer module.
st = StyleTransfer(device="cpu", pooling="max")

result_image = st.stylize(
    content_image=content_image,
    style_images=style_images,
    content_weight=0.05,
    face_weight=0.25,
)

# Save the result to disk.
result_image.save("out.png")
```

### Run from Command Line

+ ### Background Removal
```bash
python remove_bg.py content.jpg output.jpg
```

+ `content.jpg`: the input image whose background will be removed.

+ `output.jpg`: the path where the result image is saved.

You can run `python remove_bg.py --help` for more info.

+ ### Style Transfer
```bash
python stylize.py input.jpg style_1.jpg style_2.jpg -o out.png -cw 0.05 -fw 0.25
```
+ `input.jpg`: the input image to be stylized.

+ `style_N.jpg`: the style images used to stylize the content image; at least one is required.

+ `-o out.png`
(`--save-path`): the path where the result image is saved.

+ `-cw 0.05` (`--content-weight`): the **Content Loss** weight. It defines how closely the content of the result image matches the content image.

+ `-fw 0.25` (`--face-weight`): the **FaceID Loss** weight. It defines how closely the detected faces in the result image match those in the content image.

You can run `python stylize.py --help` for more info.

## Example Results

**Note**: it also works with multiple faces in the same image.

<table>
  <tbody>
    <tr>
      <th>Content</th>
      <th>Style</th>
      <th>Result</th>
    </tr>
    <tr>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/original/example1.jpeg?raw=true" width="150"/></th>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/styles/eletricity.jpg?raw=true" height="145"/></th>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/results/example1.png?raw=true" width="150"/></th>
    </tr>
    <tr>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/original/example2.jpeg?raw=true" width="150"/></th>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/styles/abstract2.jpg?raw=true" height="145"/></th>
      <th><img src="https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/results/example2.png?raw=true" width="150"/></th>
    </tr>
    <tr>
      <th><img
src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/original/example3.jpeg?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/styles/oil_painting_couple.jpeg?raw=true\" height=\"145\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/results/example3.png?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/original/example4.jpeg?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/styles/abstract.jpg?raw=true\" height=\"145\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/results/example4.png?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/original/example5.jpeg?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/styles/abstract3.jpg?raw=true\" height=\"145\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/results/example5.png?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/original/example6.jpeg?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg 
src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/styles/flames.jpg?raw=true\" height=\"145\"/\u003e\u003c/th\u003e\n      \u003cth\u003e\u003cimg src=\"https://github.com/thiagoambiel/PortraitStylization/blob/colab/assets/examples/results/example6.png?raw=true\" width=\"150\"/\u003e\u003c/th\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\n## Acknowledgements\nThanks to the authors of these amazing projects.\n+ [https://github.com/crowsonkb/style-transfer-pytorch](https://github.com/crowsonkb/style-transfer-pytorch)\n+ [https://github.com/timesler/facenet-pytorch](https://github.com/timesler/facenet-pytorch)\n+ [https://github.com/thepowerfuldeez/facemesh.pytorch](https://github.com/thepowerfuldeez/facemesh.pytorch) \n+ [https://github.com/ZHKKKe/MODNet](https://github.com/ZHKKKe/MODNet)\n\n## License\nPortraitStylization is released under the MIT license. Please see the [LICENSE](https://github.com/thiagoambiel/PortraitStylization/blob/main/LICENSE) file for more information.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsws-5007%2Fportrait-stylization_python","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsws-5007%2Fportrait-stylization_python","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsws-5007%2Fportrait-stylization_python/lists"}