# Splicing ViT Features for Semantic Appearance Transfer (CVPR 2022 - Oral)
## [Project Page]

[![arXiv](https://img.shields.io/badge/arXiv-Splice-b31b1b.svg)](http://arxiv.org/abs/2201.00424)
![Pytorch](https://img.shields.io/badge/PyTorch->=1.9.0-Red?logo=pytorch)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/omerbt/Splice/blob/master/Splice.ipynb)
![teaser](imgs/teaser.png)

**Splice** is a method for semantic appearance transfer, as described in [Splicing ViT Features for Semantic Appearance Transfer](https://arxiv.org/abs/2201.00424).

>Given two input images, a source structure image and a target appearance image, our method generates a new image in which
>the structure of the source image is preserved, while the visual appearance of the target image is transferred in a semantically aware manner.
>That is, objects in the structure image are "painted" with the visual appearance of semantically related objects in the appearance image.
>Our method leverages a self-supervised, pre-trained ViT model as an external semantic prior. This allows us to train our generator only on
>a single input image pair, without any additional information (e.g., segmentation/correspondences), and without adversarial training. Thus,
>our framework can work across a variety of objects and scenes, and can generate high quality results in high resolution (e.g., HD).
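
To make the ViT prior concrete, below is a minimal PyTorch sketch of the two losses described in the paper: structure is matched through the self-similarity of the keys from the ViT's deepest attention layer, and appearance through the global `[CLS]` token. The DINO hub model is real, but the hook-based key extraction, `vit_features`, and `splice_losses` here are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

# Frozen self-supervised DINO ViT used as the external semantic prior.
dino = torch.hub.load('facebookresearch/dino:main', 'dino_vits16').eval()
for p in dino.parameters():
    p.requires_grad_(False)

_cache = {}

def _grab_keys(module, inputs, output):
    # The qkv projection emits (batch, tokens, 3*dim); slice out the keys.
    b, n, c3 = output.shape
    _cache['keys'] = output.reshape(b, n, 3, c3 // 3)[:, :, 1, :]

dino.blocks[-1].attn.qkv.register_forward_hook(_grab_keys)

def vit_features(img):
    """Return (deepest-layer spatial keys, [CLS] token) for a (1, 3, H, W) image."""
    cls_token = dino(img)         # DINO's forward returns the [CLS] embedding
    keys = _cache['keys'][0, 1:]  # drop the [CLS] row -> (tokens, dim)
    return keys, cls_token[0]

def self_sim(keys):
    # Cosine self-similarity matrix of the keys: (tokens, tokens).
    k = F.normalize(keys, dim=-1)
    return k @ k.t()

def splice_losses(output, structure_img, appearance_img):
    keys_out, cls_out = vit_features(output)
    keys_src, _ = vit_features(structure_img)
    _, cls_app = vit_features(appearance_img)
    # Structure: match the keys' self-similarity with the structure image.
    loss_structure = F.mse_loss(self_sim(keys_out), self_sim(keys_src))
    # Appearance: match the global [CLS] token of the appearance image.
    loss_appearance = F.mse_loss(cls_out, cls_app)
    return loss_structure, loss_appearance
```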

## Getting Started
### Installation

```bash
git clone https://github.com/omerbt/Splice.git
cd Splice
pip install -r requirements.txt
```
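
A quick, optional sanity check that the installed PyTorch matches the >=1.9.0 badge above (purely illustrative):

```python
import torch

# Parse "1.13.1+cu117"-style version strings and compare against 1.9.
major, minor = (int(v) for v in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (1, 9), f"found torch {torch.__version__}, need >= 1.9.0"
```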

### Run examples [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/omerbt/Splice/blob/master/Splice.ipynb)

Run the following command to start training:
```bash
python train.py --dataroot datasets/splicing/cows
```
Intermediate results will be saved to `out/output.png` during optimization. The frequency of saving intermediate results is controlled by the `save_epoch_freq` flag in the configuration.
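
For intuition only, here is a toy stand-in for this optimization loop, reusing the hypothetical `splice_losses` sketch from above; the tiny convolutional "generator", the random tensors standing in for the real image pair, and the loss weighting are all assumptions, not the repository's `train.py`.

```python
import torch
import torch.nn as nn

# Toy generator: a real run would use the repo's image-to-image network.
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(generator.parameters(), lr=2e-3)

structure_img = torch.rand(1, 3, 224, 224)   # stands in for the source structure image
appearance_img = torch.rand(1, 3, 224, 224)  # stands in for the target appearance image

for step in range(200):
    output = generator(structure_img)
    l_struct, l_app = splice_losses(output, structure_img, appearance_img)
    loss = l_struct + 0.1 * l_app            # relative weighting is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
```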

## Sample Results
![plot](imgs/results.png)

## Citation
```bibtex
@inproceedings{tumanyan2022splicing,
  title={Splicing ViT Features for Semantic Appearance Transfer},
  author={Tumanyan, Narek and Bar-Tal, Omer and Bagon, Shai and Dekel, Tali},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10748--10757},
  year={2022}
}
```