Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jegp/multimodalrnnproject
A project on using RNN on multimodal data for person recognition
- Host: GitHub
- URL: https://github.com/jegp/multimodalrnnproject
- Owner: Jegp
- License: gpl-3.0
- Created: 2017-05-29T22:21:12.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2017-06-06T20:53:27.000Z (over 7 years ago)
- Last Synced: 2024-10-28T12:58:58.788Z (about 2 months ago)
- Language: Python
- Size: 15.8 MB
- Stars: 2
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Multimodal RNNs
A project on using recurrent neural networks (LSTMs) on multimodal data for person recognition.

## Prerequisites
To run this project you need to install:

* Python 3
* Keras - deep learning framework
* TensorFlow or Theano - machine learning frameworks (required by Keras)
* Matplotlib

To use the hyperparameter tuning, you are also required to install Hyperas (Keras + Hyperopt), which can be found here: https://github.com/maxpumperla/hyperas

All of the above can be installed using pip. I strongly recommend doing this in a virtualenv.
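The setup above can be sketched as a short shell session (the package names are the common PyPI names and are assumptions; check each project's own install instructions):

```shell
# Create and activate an isolated environment, then install the dependencies.
python3 -m venv venv
. venv/bin/activate
pip install keras tensorflow matplotlib hyperas
```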
## File layout
We have three models: one using only audio data, one using only video data, and one using both (dualmodal). The files containing ``hyper`` are concerned with hyper-parameter optimisation, while ``unimodal_audio.py``, ``unimodal_video.py`` and ``dualmodal.py`` contain the optimised model parameters.

## How to use
To run the already optimised files, simply pull this project and run:

```
python3 dualmodal.py
```
This example runs the dualmodal model.
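The repository's model code is not reproduced here, but the building block all three models share is the LSTM recurrence. As an illustration only, here is a single LSTM time step in plain NumPy (the weight shapes and gate ordering are assumptions, not the project's code):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (illustrative sketch, not the project's code).

    x: input at time t, shape (d,)
    h_prev, c_prev: previous hidden/cell state, shape (n,)
    W: (4n, d), U: (4n, n), b: (4n,) -- the four gates' weights stacked.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i = sigmoid(z[:n])          # input gate
    f = sigmoid(z[n:2 * n])     # forget gate
    o = sigmoid(z[2 * n:3 * n]) # output gate
    g = np.tanh(z[3 * n:])      # candidate cell state
    c = f * c_prev + i * g      # new cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c

# Tiny demo with random weights: feed 5 frames of 3 features through the cell.
rng = np.random.default_rng(0)
d, n = 3, 4
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(5, d)):
    h, c = lstm_step(x, h, c, W, U, b)
```

In the dualmodal case the audio and video streams would each contribute features per time step; a Keras ``LSTM`` layer packages this recurrence, so the sketch is only meant to show what happens inside one step.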
### Pre-processing
To run the pre-processing step, you need to download the GRID dataset (http://spandh.dcs.shef.ac.uk/gridcorpus/) into the same folder as this repository. Then simply run the ``pre_chunks.py`` file:

```
python3 pre_chunks.py
```
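``pre_chunks.py`` itself is not shown here; as an illustration of the chunking idea its name suggests, this sketch splits a sequence of frames into fixed-size chunks suitable for feeding an RNN (the chunk size and drop-the-remainder behaviour are assumptions):

```python
import numpy as np

def chunk_frames(frames, chunk_size):
    """Split a (T, features) array into fixed-size chunks, dropping any
    trailing remainder. Illustrative only; the project's pre_chunks.py
    may chunk differently."""
    n_chunks = len(frames) // chunk_size
    return frames[:n_chunks * chunk_size].reshape(n_chunks, chunk_size, -1)

frames = np.arange(20).reshape(10, 2)  # 10 frames, 2 features each
chunks = chunk_frames(frames, 4)       # -> shape (2, 4, 2); last 2 frames dropped
```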