https://github.com/lukas-blecher/blender-dextr
Use Deep Extreme Cut (DEXTR) to mask image sequences with motion tracking data from Blender
- Host: GitHub
- URL: https://github.com/lukas-blecher/blender-dextr
- Owner: lukas-blecher
- License: gpl-3.0
- Created: 2019-10-20T18:25:03.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2020-01-05T18:35:15.000Z (almost 6 years ago)
- Last Synced: 2025-05-07T12:13:10.488Z (5 months ago)
- Topics: blender, deep-learning, machine-learning, masking, tracking, video-editing
- Language: Python
- Size: 7.63 MB
- Stars: 11
- Watchers: 2
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Blender DEXTR
**Note:** Since publishing this repository, I have written an improved add-on based on the same idea. It can be found here: [https://github.com/lukas-blecher/AutoMask](https://github.com/lukas-blecher/AutoMask)
### Base Repository
This repository is an application of the [Deep Extreme Cut (DEXTR) repository](https://github.com/scaelles/DEXTR-PyTorch) by scaelles for [Blender](https://www.blender.org).
This enables you to get automatic masks from only a few tracking points. A downside is that the points determine the bounding box of the mask. There is an option for padding, but with it the performance of the pretrained model suffers greatly. This implementation is also unable to take previous frames into account and only works on a frame-to-frame basis.
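To make the bounding-box limitation concrete, here is a minimal sketch of how extreme points could be derived from tracked marker positions, with an optional outward padding. The function name, point format, and `pad` parameter are illustrative assumptions, not this add-on's actual API:

```python
# Hypothetical helper, not part of this repository's code.
def extreme_points(points, pad=0.0):
    """points: list of (x, y) pixel coordinates from Blender's tracker.
    Returns the left-, right-, top- and bottom-most points, optionally
    pushed outwards by `pad` times the bounding-box size."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    left = min(points, key=lambda p: p[0])
    right = max(points, key=lambda p: p[0])
    top = min(points, key=lambda p: p[1])
    bottom = max(points, key=lambda p: p[1])
    return [
        (left[0] - pad * w, left[1]),
        (right[0] + pad * w, right[1]),
        (top[0], top[1] - pad * h),
        (bottom[0], bottom[1] + pad * h),
    ]
```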
### Video to Image Sequence
The required input is the individual frames. To extract the frames from a video, I've added the Python script `framecutter.py`, which saves the pictures in a compatible manner using [ffmpeg](https://www.ffmpeg.org).
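The exact flags `framecutter.py` uses are not shown here; a hedged sketch of the general approach (the output naming pattern is an assumption) looks like this:

```python
# Hypothetical sketch of frame extraction with ffmpeg; the naming
# pattern and flags are assumptions, not framecutter.py's actual ones.
import subprocess

def cut_frames(video_path, out_dir):
    subprocess.run(
        ["ffmpeg", "-i", video_path, f"{out_dir}/%05d.png"],
        check=True,  # raise if ffmpeg fails
    )
```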
### Blender
Finally, I've written an add-on for Blender 2.7.x that extracts the tracking data and saves it as a .csv file. To install it, go to `File > User Preferences > Add-ons`, press `Install Add-on from File...`, and choose the Python file `export_tracks_blender.py`. After activation, the new option `Movie Clip Editor > Tools Panel > Solve > Export tracks` is available in Blender.
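For orientation, exporting marker positions from a movie clip boils down to something like the following hedged sketch using Blender's Python API; the CSV column layout is an assumption and may differ from what `export_tracks_blender.py` actually writes:

```python
# Hedged sketch, to be run inside Blender; the column layout is an
# assumption, not necessarily export_tracks_blender.py's output.
import csv
import bpy

clip = bpy.data.movieclips[0]
width, height = clip.size
with open("tracks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for track in clip.tracking.tracks:
        for marker in track.markers:
            # marker.co is normalized (0..1); scale to pixels
            writer.writerow([track.name, marker.frame,
                             marker.co[0] * width,
                             marker.co[1] * height])
```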
I recommend using a `Dilate/Erode` node with `Feather` mode in the `Node Editor`, set to a negative distance.
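The node can be set up manually or via a short script; the distance value below is illustrative:

```python
# Hedged sketch: add a feathered Dilate/Erode node in the compositor
# (negative distance erodes and softens the mask edge).
import bpy

scene = bpy.context.scene
scene.use_nodes = True
node = scene.node_tree.nodes.new("CompositorNodeDilateErode")
node.mode = "FEATHER"
node.distance = -3
```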

### Usage
* Track the bounding box of the object
* Export the tracking data
* Get Image Sequence
* Execute a command like `python demo.py --input PATH_FOLDER_OF_IMAGE_SEQUENCE --output OUTPUT_FOLDER --anchor-points PATH_TO_FOLDER_WITH_TRACKING_DATA`
* Add `Image Sequence` in the Node Editor in Blender. (Optional: add a `Dilate/Erode` node.)

This is how an output could look:

Original Video: https://www.youtube.com/watch?v=1Cir0J6jwBM

---
#### Original README:
## Deep Extreme Cut (DEXTR)
Visit our [project page](http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr) for accessing the paper, and the pre-computed results.
This is the implementation of our work `Deep Extreme Cut (DEXTR)`, for object segmentation from extreme points.
#### This code was ported to PyTorch 0.4.0! For the previous version of the code with Pytorch 0.3.1, please checkout [this branch](https://github.com/scaelles/DEXTR-PyTorch/tree/PyTorch-0.3.1).
#### NEW: Keras with TensorFlow backend implementation also available: [DEXTR-KerasTensorflow](https://github.com/scaelles/DEXTR-KerasTensorflow)!

### Abstract
This paper explores the use of extreme points in an object (left-most, right-most, top, bottom pixels) as input to obtain precise object segmentation for images and videos. We do so by adding an extra channel to the image in the input of a convolutional neural network (CNN), which contains a Gaussian centered in each of the extreme points. The CNN learns to transform this information into a segmentation of an object that matches those extreme points. We demonstrate the usefulness of this approach for guided segmentation (grabcut-style), interactive segmentation, video object segmentation, and dense segmentation annotation. We show that we obtain the most precise results to date, also with less user input, in an extensive and varied selection of benchmarks and datasets.
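The extra input channel described above can be pictured with a short sketch: one 2-D Gaussian per extreme point, merged by element-wise maximum. This is a hedged illustration (the `sigma` value and function name are assumptions; the base repository ships its own helpers):

```python
# Hypothetical illustration of the extra input channel: a Gaussian
# centered at each extreme point, combined with an element-wise max.
import numpy as np

def gaussian_channel(shape, points, sigma=10.0):
    """shape: (H, W); points: iterable of (x, y) extreme points."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    channel = np.zeros(shape, dtype=np.float32)
    for px, py in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        channel = np.maximum(channel, g)
    return channel
```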
### Installation
The code was tested with [Miniconda](https://conda.io/miniconda.html) and Python 3.6. After installing the Miniconda environment:

0. Clone the repo:
```Shell
git clone https://github.com/scaelles/DEXTR-PyTorch
cd DEXTR-PyTorch
```
1. Install dependencies:
```Shell
conda install pytorch torchvision -c pytorch
conda install matplotlib opencv pillow scikit-learn scikit-image
```
2. Download the model by running the script inside ```models/```:
```Shell
cd models/
chmod +x download_dextr_model.sh
./download_dextr_model.sh
cd ..
```
The default model is trained on PASCAL VOC Segmentation train + SBD (10582 images). To download models trained on PASCAL VOC Segmentation train or COCO, please visit our [project page](http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr/#downloads), or keep scrolling till the end of this README.

3. To try the demo version of DEXTR, please run:
```Shell
python demo.py
```
If installed correctly, the result should look like this:
To train and evaluate DEXTR on PASCAL (or PASCAL + SBD), please follow these additional steps:
4. Install tensorboard (integrated with PyTorch).
```Shell
pip install tensorboard tensorboardx
```

5. Download the pre-trained PSPNet model for semantic segmentation, taken from [this](https://github.com/isht7/pytorch-deeplab-resnet) repository.
```Shell
cd models/
chmod +x download_pretrained_psp_model.sh
./download_pretrained_psp_model.sh
cd ..
```
6. Set the paths in ```mypath.py```, so that they point to the location of the PASCAL/SBD dataset (a hedged sketch of the expected structure follows below).

7. Run ```python train_pascal.py```, after changing the default parameters, if necessary (e.g. gpu_id).
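For reference, the path configuration usually amounts to a small class mapping dataset names to directories. A hedged sketch, assuming this shape (the actual `mypath.py` in the repo may differ; the directories are placeholders):

```python
# Hedged sketch of a mypath.py-style configuration; the class shape is
# an assumption and the paths are placeholders for your local setup.
class Path(object):
    @staticmethod
    def db_root_dir(database):
        if database == 'pascal':
            return '/path/to/VOCdevkit/VOC2012'    # PASCAL VOC 2012
        elif database == 'sbd':
            return '/path/to/benchmark_RELEASE'    # SBD
        raise NotImplementedError(
            'Database {} not available.'.format(database))
```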
Enjoy!!
### Pre-trained models
You can use the following DEXTR models under MIT license as pre-trained on:
* [PASCAL + SBD](https://data.vision.ee.ethz.ch/kmaninis/share/DEXTR/Downloads/models/dextr_pascal-sbd.pth), trained on PASCAL VOC Segmentation train + SBD (10582 images). Achieves mIoU of 91.5% on PASCAL VOC Segmentation val.
* [PASCAL](https://data.vision.ee.ethz.ch/kmaninis/share/DEXTR/Downloads/models/dextr_pascal.pth), trained on PASCAL VOC Segmentation train (1464 images). Achieves mIoU of 90.5% on PASCAL VOC Segmentation val.
* [COCO](https://data.vision.ee.ethz.ch/kmaninis/share/DEXTR/Downloads/models/dextr_coco.pth), trained on COCO train 2014 (82783 images). Achieves mIoU of 87.8% on PASCAL VOC Segmentation val.
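To sanity-check a downloaded checkpoint outside of `demo.py`, a minimal hedged snippet (file name taken from the list above) can inspect the weights on the CPU:

```python
# Hedged sketch: load a downloaded .pth checkpoint and count its
# parameter tensors; building the network itself is left to demo.py.
import torch

state_dict = torch.load('models/dextr_pascal-sbd.pth', map_location='cpu')
print('{} parameter tensors'.format(len(state_dict)))
```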
### Citation
If you use this code, please consider citing the following papers:

```
@Inproceedings{Man+18,
  Title     = {Deep Extreme Cut: From Extreme Points to Object Segmentation},
  Author    = {K.K. Maninis and S. Caelles and J. Pont-Tuset and L. {Van Gool}},
  Booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  Year      = {2018}
}

@InProceedings{Pap+17,
  Title     = {Extreme clicking for efficient object annotation},
  Author    = {D.P. Papadopoulos and J. Uijlings and F. Keller and V. Ferrari},
  Booktitle = {ICCV},
  Year      = {2017}
}
```

We thank the authors of [pytorch-deeplab-resnet](https://github.com/isht7/pytorch-deeplab-resnet) for making their PyTorch re-implementation of DeepLab-v2 available!
If you encounter any problems please contact us at {kmaninis, scaelles}@vision.ee.ethz.ch.