Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/eczy/valence-shifted-caption-generation
- Host: GitHub
- URL: https://github.com/eczy/valence-shifted-caption-generation
- Owner: eczy
- Created: 2018-11-12T19:10:45.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2018-12-13T06:29:49.000Z (about 6 years ago)
- Last Synced: 2024-10-27T17:37:52.296Z (3 months ago)
- Size: 406 MB
- Stars: 3
- Watchers: 4
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# 595-valence-shifting-captions
![Project flow diagram](https://github.com/eczy/595-valence-shifting-captions/blob/master/projectFlow.png)
## Installation
This project assumes that you are using an Anaconda/Miniconda environment running Python 3.6.
If you already have a Python 3.6 environment, activate it. Otherwise, follow these steps to create and activate a new one.
1. Install a recent version of Anaconda
2. `conda create -n test595ProjInstall python=3.6 anaconda`
3. `source activate test595ProjInstall`
4. You should now be in the new environment and should see `(test595ProjInstall)` in your command-line prompt.

Dependencies:
- `keras==1.2.2`
- `tensorflow==0.12.1`
- `tqdm`
- `numpy`
- `pandas`
- `matplotlib`
- `pillow`
- `jupyterlab` (or jupyter notebook if you prefer -- package dependency is then `jupyter`)
- `stanfordcorenlp` (python wrapper)
- `textblob`
- `progressbar2`

Setup:

1. Clone this repo
2. Change directories into the repo with `cd 595-valence-shifting-captions/`
3. Pull the Image-Captioning submodule by running `git submodule update --init Image-Captioning`
4. Check out the master branch: `git checkout master`
5. Install the above dependencies using `pip install`
6. Download the StanfordCoreNLP 3.9.2 zip from http://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip and unpack it inside `595-valence-shifting-captions/` (a quick check of this setup is sketched after this list)
7. Download the Amazon data submitted by the team; the necessary data has already been parsed (parsing it yourself would take several days of computing time). Unzip the file and move `amazonRawData/`, `amazon_counts/`, `amazon_pairTuples/`, and `amazon_sentenceTuples/` into `595-valence-shifting-captions/valanceModel/`
8. Download the Flickr8k dataset submitted by the team and move both `Flickr8k_text/` and `Flicker8k_Dataset/` into `595-valence-shifting-captions/Image-Captioning/`
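Before moving on, it is worth sanity-checking that the NLP pieces of the setup work together (see step 6). The following is a minimal Python sketch, assuming the CoreNLP zip was unpacked to the default directory name from step 6; the caption string is just an example:

```python
from textblob import TextBlob
from stanfordcorenlp import StanfordCoreNLP

caption = "a happy dog runs across a sunny field"

# TextBlob sentiment polarity is a float in [-1.0, 1.0];
# this caption should score positive.
print(TextBlob(caption).sentiment.polarity)

# Launch a local CoreNLP instance from the directory unpacked in step 6
# (CoreNLP is also why the notebook below must be run as root).
nlp = StanfordCoreNLP('./stanford-corenlp-full-2018-10-05')
print(nlp.pos_tag(caption))  # e.g. [('a', 'DT'), ('happy', 'JJ'), ...]
nlp.close()  # shut the Java server down when finished
```

If both calls succeed, the environment is ready for the notebook.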
## Processing an Example Image

1. Run JupyterLab or Jupyter Notebook *AS ROOT* (this is unfortunately required by StanfordCoreNLP).
   The command is `jupyter lab --allow-root` or `jupyter notebook --allow-root`, depending on your preference.
   - Note: if you are not using your base Anaconda environment, you should add the
     environment you will be using to the set of IPython kernels so that the installed
     dependencies can be used by the notebook. If this is the case, activate your desired
     environment and run, substituting your environment's name:
     `python -m ipykernel install --user --name <env_name> --display-name "Python (<env_name>)"`
2. Open `Image Captioning InceptionV3-minimal.ipynb` using the desired kernel (see above).
3. Restart the kernel and run the full notebook. Shifted captions for 5 images from
the testing dataset will be generated at the bottom of the page.
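To build intuition for what a "shifted" caption is, here is a self-contained sketch of the general idea: score candidate word substitutions with TextBlob polarity and keep the one that moves the sentence in the desired direction. This illustrates the concept only, not the pipeline the notebook actually runs; the caption and hand-picked candidate substitutes are made-up examples:

```python
from textblob import TextBlob

def shift_caption(caption, candidates, direction="positive"):
    """Replace each word with the candidate substitute whose polarity
    is most extreme in the requested direction.

    `candidates` maps a word to possible substitutes; building that map
    (e.g. from a corpus or WordNet) is the hard part and is not shown.
    """
    polarity = lambda w: TextBlob(w).sentiment.polarity
    pick = max if direction == "positive" else min
    return " ".join(
        pick(candidates.get(word, [word]), key=polarity)
        for word in caption.split()
    )

# Hypothetical candidates chosen by hand for illustration.
caption = "a dog runs across a field"
candidates = {
    "dog": ["dog", "filthy dog", "adorable dog"],
    "field": ["field", "bleak field", "sunny field"],
}
print(shift_caption(caption, candidates, direction="positive"))
print(shift_caption(caption, candidates, direction="negative"))
```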