# Breathing Life Into Sketches Using Text-to-Video Priors
> **Breathing Life Into Sketches Using Text-to-Video Priors**
>
> Rinon Gal, Yael Vinker, Yuval Alaluf, Amit Bermano, Daniel Cohen-Or, Ariel Shamir, Gal Chechik
>
> Given a still sketch in vector format and a text prompt describing a desired action, our method automatically animates the drawing with respect to the prompt.

# Setup
```bash
git clone https://github.com/yael-vinker/live_sketch.git
cd live_sketch
```

## Environment
To set up our environment, please run:
```bash
conda env create -f environment.yml
```
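Then activate the environment. Its name comes from the `name:` field in `environment.yml`; it is assumed below to be `live_sketch`, so adjust if yours differs:

```bash
# Assumed environment name; check the `name:` field in environment.yml.
conda activate live_sketch
```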
Next, you need to install diffvg:
```bash
git clone https://github.com/BachiLi/diffvg.git
cd diffvg
git submodule update --init --recursive
python setup.py install
```
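To verify the build, you can try importing the Python module that diffvg installs:

```bash
# diffvg's Python bindings are exposed as `pydiffvg`; this should print without errors.
python -c "import pydiffvg; print('diffvg OK')"
```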
# Input Sketch :writing_hand:

The input sketches should be provided in SVG format and follow the recommended settings described in the paper.
You can generate your sketches with automatic tools or manually, as long as they follow the required format and can be processed with [diffvg](https://github.com/BachiLi/diffvg).
Most of our sketches were generated using [CLIPasso](https://clipasso.github.io/clipasso/), which is an image-to-sketch method that produces sketches in vector format.
If you don't already have a sketch, you can generate one with CLIPasso by providing it an image, or even use existing [text-to-image](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) tools to generate an image and then feed it to CLIPasso.
To better adjust the input sketch to our recommended settings, we provide a preprocessing script. To use it, run:
```bash
python preprocess_sketch.py --target <path_to_svg>
```
Your new sketch will be saved under "svg_input/" with a "_scaled" suffix added to the original file name.
Please open this file locally to verify you are satisfied with the changes.

**Recommended sketch format** (a quick way to check these settings is sketched after the list):
* Rendering size: 256x256
* Number of strokes: 16
* Strokes type: cubic Bezier curves
* Stroke width: 1.5
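The following is a minimal inspection script for checking a sketch against these settings before training. It assumes diffvg is installed as above and that every shape in the file is a path; the file name is a placeholder:

```python
import pydiffvg

# Load the SVG into diffvg's scene representation.
# Replace the path below with your own sketch.
canvas_width, canvas_height, shapes, shape_groups = pydiffvg.svg_to_scene(
    "svg_input/your_sketch.svg"
)

print(f"Canvas size: {canvas_width}x{canvas_height} (recommended: 256x256)")
print(f"Number of strokes: {len(shapes)} (recommended: 16)")

for i, path in enumerate(shapes):
    # In diffvg, cubic Bezier segments carry 2 control points per segment.
    is_cubic = bool((path.num_control_points == 2).all())
    width = path.stroke_width.item()
    print(f"Stroke {i}: cubic={is_cubic}, width={width} (recommended: cubic, 1.5)")
```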
**Other Sketch Resources**

Alternative tools for automatic sketch generation: [CLIPascene](https://clipascene.github.io/CLIPascene/), [DiffSketcher](https://github.com/ximinng/DiffSketcher), [VectorFusion](https://ajayj.com/vectorfusion/).
Sketches from sketch datasets: [TU-Berlin](https://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/), [Creative Sketch](https://github.com/facebookresearch/DoodlerGAN), [Rough Sketch Benchmark](https://cragl.cs.gmu.edu/sketchbench/).

**Example Sketches and Videos**
We have provided many example sketches in SVG format under "svg_input/".

# Generate a Video! :woman_artist: :art:
To animate your sketch, run the following command:
```bash
CUDA_VISIBLE_DEVICES=0 python animate_svg.py \
        --target "svg_input/<sketch_name>" \
        --caption "<text prompt describing the desired motion>" \
        --output_folder <output_folder_name>
```
For example:
```bash
CUDA_VISIBLE_DEVICES=0 python animate_svg.py \
        --target "svg_input/surfer0_scaled1" \
        --caption "A surfer riding and maneuvering on waves on a surfboard." \
        --output_folder surfer0_scaled1
```
The output video will be saved to "output_videos/".
The output includes the network's weights and SVG frame logs along with their rendered .mp4 files (under svg_logs and mp4_logs respectively). At the end of training, we also output a high-quality GIF render of the last iteration (HQ_gif.gif).

You can use these additional arguments to modify the default settings:
* ```--num_frames``` (default is 24)
* ```--num_iter``` (default is 1000)
* ```--save_vid_iter``` (default is 100)
* ```--lr_local``` (default is 0.005)
* ```--lr_base_global``` (default is 0.0001)
* ```--seed``` you can change the seed to produce different videos

You can also control the strength of global motions using the following arguments:
* ```--rotation_weight``` should be between 0 and 1 (keep small)
* ```--scale_weight```
* ```--shear_weight```
* ```--translation_weight``` we recommend not using a value larger than 3

We provide an example run script in `scripts/run_example.sh`.
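For instance, a run combining several of these flags might look like this (the values below are illustrative, not tuned recommendations):

```bash
CUDA_VISIBLE_DEVICES=0 python animate_svg.py \
        --target "svg_input/surfer0_scaled1" \
        --caption "A surfer riding and maneuvering on waves on a surfboard." \
        --output_folder surfer0_scaled1_v2 \
        --num_frames 24 \
        --seed 42 \
        --rotation_weight 0.05 \
        --translation_weight 2
```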
## Tips

* If your sketch changes too significantly, try setting a lower `--lr_local`.
* If you want more global movement, try increasing `--translation_weight`.
* Small visual artifacts can often be fixed by changing the `--seed`.
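For example, one simple way to explore seeds is to sweep over a few values and keep the best result (a sketch reusing the surfer example above; the output folder names are arbitrary):

```bash
# Re-run the same sketch and caption with several seeds.
for seed in 0 1 2 3; do
    CUDA_VISIBLE_DEVICES=0 python animate_svg.py \
        --target "svg_input/surfer0_scaled1" \
        --caption "A surfer riding and maneuvering on waves on a surfboard." \
        --output_folder surfer0_scaled1_seed${seed} \
        --seed ${seed}
done
```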
## Citation
If you find this useful for your research, please cite the following:
```bibtex
@article{gal2023breathing,
title={Breathing Life Into Sketches Using Text-to-Video Priors},
author={Rinon Gal and Yael Vinker and Yuval Alaluf and Amit H. Bermano and Daniel Cohen-Or and Ariel Shamir and Gal Chechik},
year={2023},
eprint={2311.13608},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```