https://github.com/omerbt/splice
Official PyTorch implementation for "Splicing ViT Features for Semantic Appearance Transfer", presenting "Splice" (CVPR 2022 Oral)
- Host: GitHub
- URL: https://github.com/omerbt/splice
- Owner: omerbt
- Created: 2021-08-15T14:38:23.000Z (almost 4 years ago)
- Default Branch: master
- Last Pushed: 2023-11-21T11:52:55.000Z (over 1 year ago)
- Last Synced: 2025-03-29T21:04:34.502Z (about 2 months ago)
- Topics: cvpr2022, generative-models, image-translation, single-image, single-image-generation, splice, style-transfer, vision-transformer
- Language: Jupyter Notebook
- Homepage: https://splice-vit.github.io/
- Size: 206 MB
- Stars: 382
- Watchers: 9
- Forks: 32
- Open Issues: 4
Metadata Files:
- Readme: README.md
README
# Splicing ViT Features for Semantic Appearance Transfer (CVPR 2022 - Oral)
## [Project Page](https://splice-vit.github.io/) | [Paper](http://arxiv.org/abs/2201.00424)

[Open in Colab](https://colab.research.google.com/github/omerbt/Splice/blob/master/Splice.ipynb)
**Splice** is a method for semantic appearance transfer, as described in [Splicing ViT Features for Semantic Appearance Transfer](http://arxiv.org/abs/2201.00424).
> Given two input images, a source structure image and a target appearance image, our method generates a new image in which the structure of the source image is preserved, while the visual appearance of the target image is transferred in a semantically aware manner. That is, objects in the structure image are "painted" with the visual appearance of semantically related objects in the appearance image. Our method leverages a self-supervised, pre-trained ViT model as an external semantic prior. This allows us to train our generator only on a single input image pair, without any additional information (e.g., segmentation or correspondences), and without adversarial training. Thus, our framework can work across a variety of objects and scenes, and can generate high-quality results in high resolution (e.g., HD).
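For intuition only, here is a rough sketch of the two feature-level losses this implies; it is not the repository's code. Following the paper, appearance is represented by the ViT's global [CLS] token and structure by the self-similarity of keys in the deepest attention layer. The sketch assumes these DINO-ViT features have already been extracted as tensors:

```python
import torch
import torch.nn.functional as F


def self_similarity(keys: torch.Tensor) -> torch.Tensor:
    """Cosine self-similarity between all patch keys of one image.

    keys: (num_patches, dim) keys from the deepest ViT attention layer.
    Returns a (num_patches, num_patches) similarity matrix.
    """
    keys = F.normalize(keys, dim=-1)
    return keys @ keys.T


def appearance_loss(cls_out: torch.Tensor, cls_app: torch.Tensor) -> torch.Tensor:
    """Match the global [CLS] token of the generated image to that of
    the appearance image."""
    return F.mse_loss(cls_out, cls_app)


def structure_loss(keys_out: torch.Tensor, keys_src: torch.Tensor) -> torch.Tensor:
    """Match the key self-similarity of the generated image to that of
    the structure image."""
    return F.mse_loss(self_similarity(keys_out), self_similarity(keys_src))
```

The sum of these two terms is what drives the generator during single-pair optimization; exact weights and feature layers are as configured in the repository.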
## Getting Started

### Installation

```bash
git clone https://github.com/omerbt/Splice.git
cd Splice
pip install -r requirements.txt
```

### Run examples [Open in Colab](https://colab.research.google.com/github/omerbt/Splice/blob/master/Splice.ipynb)

Run the following command to start training:
```bash
python train.py --dataroot datasets/splicing/cows
```
Intermediate results will be saved to `/out/output.png` during optimization. The frequency of saving intermediate results is controlled by the `save_epoch_freq` flag of the configuration; a sketch of such a loop is shown below.
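The following is a minimal sketch of this kind of single-pair optimization loop, not the repo's actual `train.py`. The `generator` and the `vit_features` helper (returning the [CLS] token and patch keys consumed by the losses sketched above) are hypothetical stand-ins, and `save_epoch_freq` mimics the configuration flag:

```python
import torch
from torchvision.utils import save_image


def optimize(generator, structure_img, appearance_img, vit_features,
             n_epochs=2000, save_epoch_freq=100, lr=2e-3):
    """Illustrative single image-pair training loop.

    vit_features(img) is assumed to return a ([CLS] token, patch keys)
    pair extracted from a frozen, pre-trained ViT.
    """
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    # Features of the fixed inputs can be computed once, outside the loop.
    with torch.no_grad():
        cls_app, _ = vit_features(appearance_img)
        _, keys_src = vit_features(structure_img)
    for epoch in range(1, n_epochs + 1):
        output = generator(structure_img)
        cls_out, keys_out = vit_features(output)
        loss = appearance_loss(cls_out, cls_app) + structure_loss(keys_out, keys_src)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Periodically dump the current result, mirroring what the
        # save_epoch_freq option controls in the repository.
        if epoch % save_epoch_freq == 0:
            save_image(output.clamp(0, 1), "out/output.png")
```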
## Sample Results

Sample results are shown on the [project page](https://splice-vit.github.io/).

## Citation
```
@inproceedings{tumanyan2022splicing,
  title={Splicing ViT Features for Semantic Appearance Transfer},
  author={Tumanyan, Narek and Bar-Tal, Omer and Bagon, Shai and Dekel, Tali},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10748--10757},
  year={2022}
}
```