![Tf-rec](https://user-images.githubusercontent.com/45713796/102780477-00b97980-43bc-11eb-88c7-a2d62592c50d.png)

**Tf-Rec** is a Python💻 package for building⚒ recommender systems. It is built on top of **Keras** and **TensorFlow 2** to utilize _GPU acceleration_ during training.

[![PyPI version](https://badge.fury.io/py/tfrec.svg)](https://pypi.org/project/tfrec/)
[![Python](https://img.shields.io/pypi/pyversions/tfrec.svg?style=flat)](https://badge.fury.io/py/tfrec)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/Praful932/Tf-Rec/blob/master/CONTRIBUTING.md)


![Tests](https://github.com/Praful932/Tf-Rec/workflows/Tests/badge.svg)
![Publish(Package)](https://github.com/Praful932/Tf-Rec/workflows/Publish(Package)/badge.svg)
![Deploy(Docs)](https://github.com/Praful932/Tf-Rec/workflows/Deploy(Docs)/badge.svg)


[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FPraful932%2FTf-Rec&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com)
![GitHub stars](https://img.shields.io/github/stars/Praful932/Tf-Rec?style=social) ![GitHub forks](https://img.shields.io/github/forks/Praful932/Tf-Rec?style=social) ![GitHub watchers](https://img.shields.io/github/watchers/Praful932/Tf-Rec?style=social)

# Contents

- [Why Tf-Rec?](#user-content-why-tf-rec-) 🧐
- [Installation](#user-content-installation-) ⚡
- [Quick Start & Docs](#user-content-quick-start--documentation-) 📝
- [API Docs](#user-content-api-docs)
- [SVD Example](#user-content-svd-example)
- [SVD++ Example](#user-content-svd-example-1)
- [KFold Cross Validation Example](#user-content-kfold-cross-validation-example)
- [Supported Algorithms](#user-content-supported-algorithms-) 🎯
- [Benchmark](#user-content-benchmark-) 🔥
- [Contribute](https://github.com/Praful932/Tf-Rec/blob/main/CONTRIBUTING.md) 😇

### Why Tf-Rec? 🧐

There are several open source libraries which implement popular recommender algorithms; in fact, this library is inspired by two of them - **Surprise** and **Funk-SVD**. However, training time becomes a bottleneck when the training data is huge. This can be solved by building on frameworks like **TensorFlow 2** & **Keras**, which support running computations on a GPU and thus deliver speed and higher throughput. Building on top of such frameworks also provides off-the-shelf capabilities such as different optimizers, the Data API (`tf.data`), exporting the model to other platforms, and much more. Tf-Rec provides _ready implementations of algorithms_ which can be used directly with a few lines of TensorFlow code. Currently this library supports [these](#user-content-supported-algorithms-) algorithms.

### Installation ⚡

The package is available on PyPI:

`pip install tfrec`
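
To quickly confirm the install, the public modules used throughout this README can be imported in a Python shell (a minimal sanity check, nothing more):

```python
# Sanity check: these imports are the ones used in the examples below
from tfrec.models import SVD, SVDpp
from tfrec.datasets import fetch_ml_100k
from tfrec.utils import preprocess_and_split, cross_validate

print('tfrec imported successfully')
```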

### Quick Start & Documentation 📝

#### API Docs

- [API Documentation](https://tfrec.netlify.app/)

#### SVD Example

```python
from tfrec.models import SVD
from tfrec.datasets import fetch_ml_100k
from tfrec.utils import preprocess_and_split
import numpy as np

# Fetch the MovieLens-100k dataset and split it into train/test sets
data = fetch_ml_100k()
dataset, user_item_encodings = preprocess_and_split(data)

(x_train, y_train), (x_test, y_test) = dataset
(user_to_encoded, encoded_to_user, item_to_encoded, encoded_to_item) = user_item_encodings

num_users = len(np.unique(data['userId']))
num_movies = len(np.unique(data['movieId']))
global_mean = np.mean(data['rating'])

# Build and train the SVD model using the standard Keras workflow
model = SVD(num_users, num_movies, global_mean)
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train)
```

```
2521/2521 [==============================] - 11s 4ms/step - loss: 0.9963
```
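
Because the model is trained with the standard Keras `fit` API, the training data can also be fed through the `tf.data` pipeline mentioned in [Why Tf-Rec?](#user-content-why-tf-rec-). Below is a minimal sketch reusing `model`, `x_train`, and `y_train` from above; the shuffle buffer, batch size, and epoch count are illustrative choices, not library defaults:

```python
import tensorflow as tf

# Wrap the (user, item) pairs and their ratings in a tf.data pipeline
train_ds = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(10_000)
    .batch(128)
    .prefetch(tf.data.AUTOTUNE)
)

# A compiled Keras model can be fit directly on the dataset
model.fit(train_ds, epochs=5)
```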

#### SVD++ Example

```python
from tfrec.models import SVDpp

# Reuses num_users, num_movies, and global_mean from the SVD example above
model = SVDpp(num_users, num_movies, global_mean)
# Register implicit feedback; needs to be called before fitting
model.implicit_feedback(x_train)
model.compile(loss='mean_squared_error', optimizer='adam')

model.fit(x_train, y_train)
```

```
2521/2521 [==============================] - 49s 20ms/step - loss: 1.0332
```
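
Since the trained model behaves like any other Keras model, held-out performance can be checked with `predict`. Here is a minimal sketch using `x_test` and `y_test` from the earlier split (flattening the prediction array is an assumption about its output shape):

```python
# Predict ratings for the held-out (user, item) pairs and compute RMSE
predictions = model.predict(x_test).flatten()
rmse = np.sqrt(np.mean((predictions - y_test) ** 2))
print(f'Test RMSE: {rmse:.4f}')
```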

#### KFold Cross Validation Example

```python
from tfrec.utils import cross_validate

# Cross-validate a freshly compiled SVD model on the training data
model = SVD(num_users, num_movies, global_mean)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'RootMeanSquaredError'])
all_metrics = cross_validate(model, x_train, y_train)
```

```
Mean Loss : 0.899022102355957
Mean Mae : 0.6596329569816589
Mean Root_mean_squared_error : 0.8578477501869202
```

### Supported Algorithms 🎯

Currently the library supports these algorithms:

- **SVD** - Despite the name, it is different from the eigendecomposition of asymmetric matrices (the classical matrix SVD). In a gist, it approximates a vector for each user and each item. The vector contains latent factors: for brevity's sake, if the item is a movie, the movie vector would represent how much the movie contains action or romance, and likewise for the user (a worked numeric sketch follows this list).
The predicted rating is given by:

![](https://latex.codecogs.com/png.latex?\hat{r}_{u,&space;i}&space;=&space;\bar{r}&space;+&space;b_{u}&space;+&space;b_{i}&space;+&space;\sum_{f=1}^{F}&space;p_{u,&space;f}&space;*&space;q_{i,&space;f})

- **SVD++** - This is an extension of SVD which incorporates implicit feedback, also taking into account the interactions between the user and the item by involving another factor. More precisely, it treats the mere fact that a user has rated an item as a signal of preference, compared to an item which the user has not rated.
The predicted rating is given by:

![image](https://user-images.githubusercontent.com/45713796/101982506-6ca03180-3c9a-11eb-8285-f9f243ab877c.png)
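
To make the SVD prediction rule above concrete, here is a small NumPy sketch for a single user-item pair with F = 3 latent factors. All numbers are made up purely for illustration and do not come from the library:

```python
import numpy as np

global_mean = 3.53                   # mean rating over the training set
b_u, b_i = 0.21, -0.40               # user and item biases
p_u = np.array([0.12, -0.30, 0.05])  # user latent factors
q_i = np.array([0.40, -0.25, 0.10])  # item latent factors

# predicted rating = global mean + user bias + item bias + dot(p_u, q_i)
r_hat = global_mean + b_u + b_i + np.dot(p_u, q_i)
print(round(r_hat, 3))  # 3.468
```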

### Benchmark 🔥

Both algorithms were tested on Google Colab using a GPU runtime. The dataset used was MovieLens-100k. Default parameters were used for initialization of the models. The optimizer used was **Adam** and the batch size was **128**.
These are the 5-Fold Cross Validation scores:

| Algorithm | Mean MAE | Mean RMSE | Time per Epoch |
| --------- | -------- | --------- | -------------- |
| **SVD** | 0.6701 | 0.8694 | < 3 sec |
| **SVD++** | 0.6838 | 0.8862 | < 45 sec |