https://github.com/surajiyer/recommender-systems
Recommender systems, 2017-18
- Host: GitHub
- URL: https://github.com/surajiyer/recommender-systems
- Owner: surajiyer
- Created: 2018-03-06T08:42:03.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2018-09-01T04:35:17.000Z (about 7 years ago)
- Last Synced: 2025-03-29T18:11:12.610Z (7 months ago)
- Topics: cbow, cifar100, cnn, nlp, recommender-system, siamese-network, skipgram, vgg16
- Language: Jupyter Notebook
- Size: 6.15 MB
- Stars: 0
- Watchers: 4
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Recommender systems, 2017-18
This repository contains notebooks and other files from the *Recommender Systems* course I took at Eindhoven University of Technology.
- Section 1: Learning CBoW and Skipgram models (with and without negative sampling) to generate word embeddings. Applied to analogy detection and sentence reconstruction.
- Section 2: Learning about neural codes (link to paper: https://arxiv.org/pdf/1404.1777.pdf) to generate image embeddings, with and without PCA compression. Three different ways to generate neural codes: convolutional networks, denoising autoencoders, and sparse autoencoders. Applied to image retrieval via nearest-neighbor search with neural codes as the feature space.
- Section 3:
  - Learning about Siamese networks and one-shot learning. Trained a Siamese network on 80 classes of the CIFAR-100 dataset, then used its neural codes to perform multiple N-way one-shot learning tasks on the remaining 20 classes. The model performs better than random guessing.
  - Learning about LSTMs, GRUs, and their bidirectional variants. Applied to generating document sequence embeddings via a binary sentiment classification task. Also applied the generated embeddings to a one-shot learning task on the Amazon product reviews dataset.
  - Combining the knowledge of Siamese networks and RNNs for the image-caption retrieval problem. Preprocessed the data to create triplet instances [, ...]. Designed a neural network architecture that generates image and caption neural codes, combines them, and optimizes a max-margin loss function to widen the margin between the positive-pair and negative-pair embeddings. Lastly, we use the *kNN* algorithm to recommend the top-k nearest images given a caption, or the top-k nearest captions given an image.
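The Section 1 skip-gram-with-negative-sampling objective can be sketched in a few lines of NumPy. This is a minimal illustration, not the notebooks' implementation: the toy vocabulary size, embedding dimension, learning rate, and the particular (center, context, negatives) triple are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10, 8
W_in = rng.normal(0, 0.1, (vocab_size, dim))   # center-word embeddings
W_out = rng.normal(0, 0.1, (vocab_size, dim))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(center, context, negatives, lr=0.1):
    """One SGD step on the SGNS objective:
    maximize log sigma(v.u_pos) + sum_k log sigma(-v.u_neg_k)."""
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        v = W_in[center].copy()
        u = W_out[word].copy()
        g = sigmoid(v @ u) - label          # gradient of the logistic loss
        W_out[word] -= lr * g * v
        W_in[center] -= lr * g * u

def loss(center, context, negatives):
    """Negative log-likelihood of the positive pair plus the negatives."""
    v = W_in[center]
    l = -np.log(sigmoid(v @ W_out[context]))
    for n in negatives:
        l -= np.log(sigmoid(-v @ W_out[n]))
    return l

# Train on a single (center=2, context=5, negatives=[1, 7]) example.
before = loss(2, 5, [1, 7])
for _ in range(50):
    sgns_step(2, 5, [1, 7])
after = loss(2, 5, [1, 7])
```

After a few steps the loss on the trained pair drops, i.e. the center embedding moves toward its context word and away from the sampled negatives.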
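Section 2's PCA compression of neural codes, plus nearest-neighbor retrieval in code space, reduces to a small sketch. Here the activation matrix is random stand-in data; in the notebooks it would be penultimate-layer CNN (or autoencoder) activations, and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
codes = rng.normal(size=(100, 512))           # 100 images, 512-d neural codes

def pca_compress(X, k):
    """Project X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                   # center the codes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                      # (n, k) compressed codes

def retrieve(query_idx, X, topn=5):
    """Nearest-neighbor image retrieval by Euclidean distance in code space."""
    d = np.linalg.norm(X - X[query_idx], axis=1)
    return np.argsort(d)[1:topn + 1]          # skip the query itself

compressed = pca_compress(codes, 64)          # 512-d -> 64-d codes
neighbors = retrieve(0, compressed)
```

Compressing with PCA before the nearest-neighbor search keeps retrieval cheap while, per the linked paper, losing little accuracy.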
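Section 3's max-margin loss over triplets can be sketched as follows, assuming cosine similarity between image and caption neural codes; the margin value and vector shapes are illustrative, not taken from the notebooks.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two code vectors."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_margin_loss(img, pos_cap, neg_cap, margin=0.2):
    """Max-margin loss: penalize when the negative caption's similarity
    comes within `margin` of the positive caption's similarity."""
    return max(0.0, margin - cosine(img, pos_cap) + cosine(img, neg_cap))

def knn_recommend(query, items, k=3):
    """Top-k nearest item codes (e.g. captions) to a query code (e.g. an image)."""
    sims = np.array([cosine(query, it) for it in items])
    return np.argsort(-sims)[:k]
```

A well-aligned triplet (positive caption close to the image, negative far) incurs zero loss; a misaligned one is penalized by margin plus the similarity gap.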