Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Yutong-Zhou-cv/Awesome-AI-in-Beauty-Industry
A Survey on AI in the beauty industry.
List: Awesome-AI-in-Beauty-Industry
- Host: GitHub
- URL: https://github.com/Yutong-Zhou-cv/Awesome-AI-in-Beauty-Industry
- Owner: Yutong-Zhou-cv
- Created: 2021-05-16T10:30:13.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2023-09-05T03:48:13.000Z (about 1 year ago)
- Last Synced: 2024-05-19T22:45:14.472Z (6 months ago)
- Topics: awesome-list, beauty-industry, papers, survey
- Homepage:
- Size: 34.2 KB
- Stars: 23
- Watchers: 5
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-artificial-intelligence - Awesome AI in Beauty Industry - A collection of resources on AI in the beauty and cosmetics industry. (Other awesome AI lists)
README
# Awesome AI in Beauty Industry 💄
[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
A collection of resources on AI in the beauty and cosmetics industry.
## *Content*
* - [ ] [1. Description](#head1)
* - [ ] [2. Quantitative Evaluation Metrics](#head2)
* - [ ] [3. Datasets](#head3)
* - [ ] [4. Paper With Code](#head4)
  * - [ ] [Survey](#head-Survey)
  * - [ ] [2023](#head-2023)
  * - [ ] [2022](#head-2022)
  * - [ ] [2021](#head-2021)
  * - [ ] [2020](#head-2020)
  * - [ ] [2019](#head-2019)
  * - [ ] [2018](#head-2018)
* [*Contact Me*](#head5)
## *1. Description*
## *2. Quantitative Evaluation Metrics*
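The list does not yet enumerate specific metrics. As an illustration only (an assumption, not taken from this list), the makeup- and hairstyle-transfer papers in Section 4 commonly report full-reference image-similarity scores such as PSNR and SSIM. A minimal sketch, assuming `scikit-image` is installed and the two images are aligned, same-sized RGB photos; the function and file names are hypothetical:

```python
# Illustrative sketch (not part of this list): PSNR and SSIM between a
# reference photo and a generated transfer result, using scikit-image.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_images(reference_path: str, generated_path: str) -> dict:
    """Compute PSNR and SSIM for two aligned, same-sized uint8 RGB images."""
    ref = io.imread(reference_path)
    gen = io.imread(generated_path)
    assert ref.shape == gen.shape, "images must share the same resolution"
    psnr = peak_signal_noise_ratio(ref, gen, data_range=255)
    ssim = structural_similarity(ref, gen, channel_axis=-1, data_range=255)
    return {"psnr": psnr, "ssim": ssim}

if __name__ == "__main__":
    # 'reference.png' and 'result.png' are placeholder file names.
    print(compare_images("reference.png", "result.png"))
```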
## *3. Datasets*
(Coming soon.)

## *4. Paper With Code*
* **Survey**
* **2023**
* (ACMMM 2023) [💬Fashion Synthesis] **SGDiff: A Style Guided Diffusion Model for Fashion Synthesis**, Zhengwentai Sun et al. [[Paper](https://arxiv.org/abs/2308.07605)]
* (arXiv preprint 2023) [💬Fashion Design] **DiffFashion: Reference-based Fashion Design with Structure-aware Transfer by Diffusion Models**, Shidong Cao et al. [[Paper](https://arxiv.org/abs/2302.06826)] [[Code](https://github.com/Rem105-210/DiffFashion)]
* **2022**
* (Knowledge-Based Systems) [💬Makeup Transfer] **TSEV-GAN: Generative Adversarial Networks with Target-aware Style Encoding and Verification for facial makeup transfer**, Zhen Xu et al. [[Paper](https://www.sciencedirect.com/science/article/pii/S0950705122010516)]
* (ECCV 2022) [💬Hairstyle Transfer] **Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment**, Taewoo Kim et al. [[Paper](https://arxiv.org/abs/2208.07765)] [[Code](https://github.com/Taeu/Style-Your-Hair)]
* (ECCV 2022) [💬Makeup Transfer] **EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer**, Chenyu Yang et al. [[Paper](https://arxiv.org/abs/2207.09840)] [[Code](https://github.com/chenyu-yang-2000/elegant)]
* (CVPR 2022) [💬Retouching] **ABPN: Adaptive Blend Pyramid Network for Real-Time Local Retouching of Ultra High-Resolution Photo**, Biwen Lei et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Lei_ABPN_Adaptive_Blend_Pyramid_Network_for_Real-Time_Local_Retouching_of_CVPR_2022_paper.pdf)] [[Models](https://www.modelscope.cn/models/damo/cv_unet_skin-retouching/summary)]
* (CVPR 2022) [💬Makeup Transfer & Protecting Facial Privacy] **Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer**, Shengshan Hu et al. [[Paper](https://arxiv.org/abs/2203.03121)]
* (AAAI 2022) [💬Makeup Transfer & Removal] **SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal**, Zhaoyang Sun et al. [[Paper](https://arxiv.org/abs/2112.03631)] [[Code](https://github.com/Snowfallingplum/SSAT)]
* **2021**
* (TPAMI 2021) [💬Makeup Transfer & Removal] **PSGAN++: Robust Detail-Preserving Makeup Transfer and Removal**, Si Liu et al. [[Paper](https://ieeexplore.ieee.org/abstract/document/9440729)]
* (CVPR 2021) [💬Makeup Transfer] **Spatially-invariant Style-codes Controlled Makeup Transfer**, Han Deng et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Deng_Spatially-Invariant_Style-Codes_Controlled_Makeup_Transfer_CVPR_2021_paper.pdf)] [[Code](https://github.com/makeuptransfer/SCGAN)]
* (CVPR 2021 [AI for Content Creation Workshop](http://visual.cs.brown.edu/workshops/aicc2021/)) [💬Makeup Transfer for video] **Deep Graphics Encoder for Real-Time Video Makeup Synthesis from Example**, Robin Kips et al. [[Paper](https://openaccess.thecvf.com/content/CVPR2021W/CVFAD/papers/Kips_Deep_Graphics_Encoder_for_Real-Time_Video_Makeup_Synthesis_From_Example_CVPRW_2021_paper.pdf)]
* (CVPR 2021) [💬Makeup Transfer] **Lipstick ain't enough: Beyond Color Matching for In-the-Wild Makeup Transfer**, Thao Nguyen et al. [[Paper](https://arxiv.org/pdf/2104.01867.pdf)] [[Code](https://github.com/VinAIResearch/CPM)]
* (IJCAI 2021) [💬Makeup Generation] **Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition**, Bangjie Yin et al. [[Paper](https://arxiv.org/pdf/2105.03162.pdf)]
* **2020**
* (IJCAI 2020) [💬Makeup Transfer] **Real-World Automatic Makeup via Identity Preservation Makeup Net**, Zhikun Huang et al. [[Paper](https://www.ijcai.org/proceedings/2020/91)] [[Code](https://github.com/huangzhikun1995/IPM-Net)]
* (CVPR 2020) [💬Makeup Transfer] **PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer**, Wentao Jiang et al. [[Paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Jiang_PSGAN_Pose_and_Expression_Robust_Spatial-Aware_GAN_for_Customizable_Makeup_CVPR_2020_paper.pdf)] [[Code](https://github.com/wtjiang98/PSGAN)]
* **2019**
* (arXiv preprint 2019) [💬Makeup Transfer] **Disentangled Makeup Transfer with Generative Adversarial Network**, Honglun Zhang et al. [[Paper](https://arxiv.org/pdf/1907.01144.pdf)] [[Code](https://github.com/Honlan/DMT)]
* **2018**

## *Contact Me*
* [Yutong ZHOU](https://github.com/Yutong-Zhou-cv) in [Interaction Laboratory, Ritsumeikan University.](https://github.com/Rits-Interaction-Laboratory) ヽ( ̄ω ̄( ̄ω ̄〃)ゝ
* If you have any questions, please feel free to contact Yutong ZHOU (E-mail: ).