Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/zsdonghao/im2txt2im
I2T2I: Text-to-Image Synthesis with textual data augmentation
- Host: GitHub
- URL: https://github.com/zsdonghao/im2txt2im
- Owner: zsdonghao
- Created: 2016-12-25T03:52:00.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2019-03-21T07:42:27.000Z (almost 6 years ago)
- Last Synced: 2024-10-03T19:41:01.304Z (4 months ago)
- Topics: image-to-text, tensorflow, tensorlayer, text-to-image
- Language: Python
- Homepage: https://github.com/zsdonghao/tensorlayer
- Size: 1.66 MB
- Stars: 30
- Watchers: 7
- Forks: 3
- Open Issues: 2
Metadata Files:
- Readme: README.md
README
# Image Captioning and Text-to-Image Synthesis with textual data augmentation
This code runs well under Python 2.7 and TensorFlow 0.11. If you use a higher version of TensorFlow, you may need to update the `tensorlayer` folder from [TensorLayer Lib](https://github.com/zsdonghao/tensorlayer).
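The version note above can be checked programmatically before running the scripts; a minimal sketch (the `0.11` threshold comes from the note above, the helper names are ours):

```python
def parse_version(version):
    """Turn a version string like '0.11.0rc1' into a comparable tuple of ints."""
    parts = []
    for piece in version.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if digits:
            parts.append(int(digits))
    return tuple(parts)


def needs_tensorlayer_update(tf_version, baseline="0.11"):
    """Return True when the installed TensorFlow is newer (major.minor) than the
    version this repo targets (0.11), i.e. the bundled `tensorlayer` folder
    may need to be refreshed from upstream."""
    return parse_version(tf_version)[:2] > parse_version(baseline)[:2]


if __name__ == "__main__":
    try:
        import tensorflow as tf  # may not be installed; the check is best-effort
        if needs_tensorlayer_update(tf.__version__):
            print("TensorFlow %s > 0.11: update the tensorlayer folder" % tf.__version__)
    except ImportError:
        print("TensorFlow not installed")
```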
## Usage
### 1. Prepare MSCOCO data and Inception model
* Before you run the scripts, follow Google's [setup guide](https://github.com/tensorflow/models/tree/master/im2txt), and set up the model, checkpoint, and data directories in `*.py`.
- Create a `data` folder.
- Download and preprocess the MSCOCO data ([click here](https://github.com/tensorflow/models/tree/master/research/im2txt)).
- Download the Inception V3 checkpoint ([click here](https://github.com/tensorflow/models/tree/master/slim#Pretrained)).

### 2. Train an image captioning model
* Train your image captioning model on MSCOCO by following my [other repo](https://github.com/zsdonghao/Image-Captioning).

### 3. Set up your paths
* In `train_im2txt2im_coco_64.py`:
  * Configure your image directory:
    `images_train_dir = '/home/.../mscoco/raw-data/train2014/'`
  * Configure the vocabulary and model directory of your image captioning module:
    `DIR = "/home/..."`
  * Directory containing model checkpoints:
    `CHECKPOINT_DIR = DIR + "/model/train"`
  * Vocabulary file generated by the preprocessing script:
    `VOCAB_FILE = DIR + "/data/mscoco/word_counts.txt"`

### 4. Train text-to-image synthesis with image captioning
* `model_im2txt.py`: model for image captioning
* `train_im2txt2im_coco_64.py`: training script for I2T2I
* `utils.py`: utility functions
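Before launching training, it can help to sanity-check the paths configured in step 3; a minimal sketch (variable names mirror the snippet above; the concrete paths are placeholders you must replace with your own):

```python
import os

# Placeholder paths: replace with your own (see step 3 above).
DIR = "/home/user/im2txt"
CHECKPOINT_DIR = DIR + "/model/train"
VOCAB_FILE = DIR + "/data/mscoco/word_counts.txt"
images_train_dir = "/home/user/mscoco/raw-data/train2014/"


def check_paths(checkpoint_dir, vocab_file, images_dir):
    """Return a list of human-readable problems; an empty list means all paths exist."""
    problems = []
    if not os.path.isdir(checkpoint_dir):
        problems.append("missing checkpoint dir: %s" % checkpoint_dir)
    if not os.path.isfile(vocab_file):
        problems.append("missing vocabulary file: %s" % vocab_file)
    if not os.path.isdir(images_dir):
        problems.append("missing training images dir: %s" % images_dir)
    return problems


if __name__ == "__main__":
    for problem in check_paths(CHECKPOINT_DIR, VOCAB_FILE, images_train_dir):
        print(problem)
```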
## Results
### 1. Results on MSCOCO
### 2. Transfer learning on MHP dataset
## Citation
* If you find this work useful, please cite:

```
@article{hao2017im2txt2im,
  title={I2T2I: Learning Text to Image Synthesis with Textual Data Augmentation},
  author={Dong, Hao and Zhang, Jingqing and McIlwraith, Douglas and Guo, Yike},
  journal={ICIP},
  year={2017}
}
```