https://github.com/yxtay/matrix-factorization-torch
# matrix-factorization-pytorch

Matrix Factorization Recommender Models in PyTorch with MovieLens
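The repository's own model code is not shown here, but the core idea can be illustrated with a minimal sketch: a biased dot-product matrix factorization module in PyTorch. The class name, dimensions, and bias terms below are illustrative assumptions, not the repository's actual implementation.

```python
import torch
from torch import nn


class MatrixFactorization(nn.Module):
    """Sketch of biased dot-product MF: score(u, i) = <p_u, q_i> + b_u + b_i.

    Illustrative only; names and structure are assumptions, not this repo's code.
    """

    def __init__(self, num_users: int, num_items: int, embed_dim: int = 32):
        super().__init__()
        self.user_embed = nn.Embedding(num_users, embed_dim)
        self.item_embed = nn.Embedding(num_items, embed_dim)
        self.user_bias = nn.Embedding(num_users, 1)
        self.item_bias = nn.Embedding(num_items, 1)

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        # Elementwise product summed over the embedding dimension gives the dot product.
        dot = (self.user_embed(user_ids) * self.item_embed(item_ids)).sum(dim=-1)
        return dot + self.user_bias(user_ids).squeeze(-1) + self.item_bias(item_ids).squeeze(-1)


model = MatrixFactorization(num_users=100, num_items=200)
scores = model(torch.tensor([0, 1]), torch.tensor([5, 7]))
```

On MovieLens-style data, `user_ids` and `item_ids` would come from batches of (user, movie) interaction pairs, with the scores fed into one of the ranking losses referenced below.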

## Entrypoints

## Losses

## Data

## References

- Google Slides: [Collaborative Filtering with Implicit Feedback](https://docs.google.com/presentation/d/15nLFgmkSEJPXkhLiXExXDByV_lot7bdHAhtqX_qLp7w/)
- [[2101.08769] Item Recommendation from Implicit Feedback](https://arxiv.org/abs/2101.08769)
- [TensorFlow Recommenders Retrieval](https://www.tensorflow.org/recommenders/api_docs/python/tfrs/tasks/Retrieval)
- BPR: [[1205.2618] BPR: Bayesian Personalized Ranking from Implicit Feedback](https://arxiv.org/abs/1205.2618)
- CCL: [[2109.12613] SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613)
- SSM: [[2201.02327] On the Effectiveness of Sampled Softmax Loss for Item Recommendation](https://arxiv.org/abs/2201.02327)
- DirectAU: [[2206.12811] Towards Representation Alignment and Uniformity in Collaborative Filtering](https://arxiv.org/abs/2206.12811)
- MAWU: [[2308.06091] Toward a Better Understanding of Loss Functions for Collaborative Filtering](https://arxiv.org/abs/2308.06091)
- InfoNCE+, MINE+: [[2312.08520] Revisiting Recommendation Loss Functions through Contrastive Learning (Technical Report)](https://arxiv.org/abs/2312.08520)
- LogQ correction: [Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations](https://research.google/pubs/sampling-bias-corrected-neural-modeling-for-large-corpus-item-recommendations/)
- MNS: [Mixed Negative Sampling for Learning Two-tower Neural Networks in Recommendations](https://research.google/pubs/mixed-negative-sampling-for-learning-two-tower-neural-networks-in-recommendations/)
- Hashing Trick: [[0902.2206] Feature Hashing for Large Scale Multitask Learning](https://arxiv.org/abs/0902.2206)
- Hash Embeddings: [[1709.03933] Hash Embeddings for Efficient Word Representations](https://arxiv.org/abs/1709.03933)
- Bloom embeddings: [Compact word vectors with Bloom embeddings](https://explosion.ai/blog/bloom-embeddings)
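As one concrete example of the losses referenced above, the BPR objective (Rendle et al., 2012) maximizes the log-sigmoid of the score margin between an observed (positive) item and a sampled (negative) item. This is a generic sketch of that formula, not the repository's implementation:

```python
import torch
import torch.nn.functional as F


def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """BPR loss: -log sigmoid(s_pos - s_neg), averaged over sampled pairs.

    Sketch of the formula from arXiv:1205.2618; not this repo's actual code.
    """
    # logsigmoid is numerically stabler than log(sigmoid(x)) for large negative margins.
    return -F.logsigmoid(pos_scores - neg_scores).mean()


pos = torch.tensor([2.0, 1.5])  # scores for observed user-item pairs
neg = torch.tensor([0.5, 1.0])  # scores for sampled negative items
loss = bpr_loss(pos, neg)
```

The loss approaches zero as positive scores exceed negative scores by a growing margin; the other referenced losses (CCL, sampled softmax, DirectAU) replace this pairwise term with different treatments of the negatives.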