Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/astridnielsen-lab/imageslegend
Last synced: 17 days ago
- Host: GitHub
- URL: https://github.com/astridnielsen-lab/imageslegend
- Owner: AstridNielsen-lab
- License: MIT
- Created: 2021-04-18T19:38:08.000Z (almost 4 years ago)
- Default Branch: master
- Last Pushed: 2021-04-18T19:38:18.000Z (almost 4 years ago)
- Last Synced: 2024-11-19T18:01:40.534Z (3 months ago)
- Language: Python
- Size: 267 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# medium-show-and-tell-caption-generator
Code to run inference on a Show and Tell model. The model is trained to generate captions for a given image.

## Weblinks
A detailed tutorial for the code can be found here:
https://medium.freecodecamp.org/building-an-image-caption-generator-with-deep-learning-in-tensorflow-a142722e9b1f

## Getting started
1. Download Docker image
```bash
$ sudo docker pull colemurray/medium-show-and-tell-caption-generator
```

2. Download model graph and weights
```bash
$ sudo docker run -e PYTHONPATH=$PYTHONPATH:/opt/app -v $PWD:/opt/app \
-it colemurray/medium-show-and-tell-caption-generator \
python3 /opt/app/bin/download_model.py \
--model-dir /opt/app/etc
```

3. Download vocabulary
```bash
$ curl -o etc/word_counts.txt https://raw.githubusercontent.com/ColeMurray/medium-show-and-tell-caption-generator/master/etc/word_counts.txt
```

4. Add testing images to the `imgs/` folder
5. Generate captions (see the sketch after this list for what this step does under the hood)
```bash
$ sudo docker run -v $PWD:/opt/app \
-e PYTHONPATH=$PYTHONPATH:/opt/app \
-it colemurray/medium-show-and-tell-caption-generator \
python3 /opt/app/medium_show_and_tell_caption_generator/inference.py \
--model_path /opt/app/etc/show-and-tell.pb \
--vocab_file /opt/app/etc/word_counts.txt \
--input_files /opt/app/imgs/*
```
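To make the last step less of a black box, here is a minimal sketch of what frozen-graph captioning looks like in TensorFlow 1.x. The tensor names (`image_feed`, `input_feed`, `lstm/state_feed`, `lstm/initial_state`, `lstm/state`, `softmax`) follow the original im2txt Show and Tell release and are an assumption about this repository's graph; the greedy decoding below also stands in for the beam search the real `inference.py` most likely performs:

```python
import tensorflow as tf  # TensorFlow 1.x, matching the era of the original code

MODEL_PATH = "etc/show-and-tell.pb"   # frozen graph from step 2
VOCAB_FILE = "etc/word_counts.txt"    # vocabulary from step 3
IMAGE_PATH = "imgs/example.jpg"       # hypothetical test image

# The vocabulary file has one "word count" pair per line; line order gives the word id.
with open(VOCAB_FILE) as f:
    words = [line.split()[0] for line in f]

# Load the frozen graph definition and import it into a fresh graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # Feed the encoded image bytes (JPEG) once to get the initial LSTM state.
    image = tf.gfile.GFile(IMAGE_PATH, "rb").read()
    state = sess.run("lstm/initial_state:0", feed_dict={"image_feed:0": image})

    # Greedy decoding: repeatedly feed the previous word and LSTM state.
    caption, word_id = [], words.index("<S>")  # "<S>" is the start token
    for _ in range(20):  # cap the caption length
        softmax, state = sess.run(
            ["softmax:0", "lstm/state:0"],
            feed_dict={"input_feed:0": [word_id], "lstm/state_feed:0": state})
        word_id = int(softmax[0].argmax())  # take the most likely next word
        if words[word_id] == "</S>":        # "</S>" is the end token
            break
        caption.append(words[word_id])
    print(" ".join(caption))
```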
## Run with conda environment
These steps assume the model and vocabulary have already been downloaded (steps 2 and 3 above).

1. Build and activate the conda environment
```bash
$ conda env create -f environment.yml
$ source activate show-and-tell
```

2. Run inference
```bash
$ python -m medium_show_and_tell_caption_generator.inference --model_path='etc/show-and-tell.pb' \
--vocab_file='etc/word_counts.txt' \
--input_files='imgs/*'
```
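Note that in both invocations the `imgs/*` pattern reaches the script unexpanded (it is quoted here, and an unmatched `/opt/app/imgs/*` is passed literally by the host shell in the Docker case), so `inference.py` is presumably expected to glob the pattern itself, in the style of im2txt's `tf.gfile.Glob`.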