Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/westlake-repl/MicroLens
A Large Short-video Recommendation Dataset with Raw Text/Audio/Image/Videos (Talk Invited by DeepMind).
- Host: GitHub
- URL: https://github.com/westlake-repl/MicroLens
- Owner: westlake-repl
- Created: 2023-09-22T02:53:46.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-05-31T13:46:55.000Z (7 months ago)
- Last Synced: 2024-08-04T03:02:16.067Z (4 months ago)
- Topics: audio-recommendation, foundation-models, image-recommendation, large, large-language-models, llm, llm-recommendation, short-video, text-recommendation, video, video-generation, video-generation-dataset, video-recommendation, video-understanding, video-understanding-dataset
- Language: Python
- Homepage:
- Size: 62.5 MB
- Stars: 112
- Watchers: 0
- Forks: 8
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# [MicroLens: A Content-Driven Micro-Video Recommendation Dataset at Scale](https://arxiv.org/pdf/2309.15379.pdf)
![Multi-Modal](https://img.shields.io/badge/Task-Multi--Modal-red)
![Foundation-Model](https://img.shields.io/badge/Task-Foundation--Model-red)
![Video-Understanding](https://img.shields.io/badge/Task-Video--Understanding-red)
![Video-Generation](https://img.shields.io/badge/Task-Video--Generation-red)
![Video-Recommendation](https://img.shields.io/badge/Task-Video--Recommendation-red)

Quick Links: [🗃️Dataset](#Dataset) |
[📭Citation](#Citation) |
[🛠️Code](#Code) |
[🚀Baseline Evaluation](#Baseline_Evaluation) |
[🤗Video Understanding Meets Recommender Systems](#Video_Understanding_Meets_Recommender_Systems) |
[💡News](#News)
# Talks & Slides: Invited Talk by Google DeepMind & YouTube & Alipay [(Slides)](https://github.com/westlake-repl/MicroLens/blob/master/MicroLens_DeepMind_Talk.pdf)
# Dataset
Download links: https://recsys.westlake.edu.cn/MicroLens-50k-Dataset/ and https://recsys.westlake.edu.cn/MicroLens-100k-Dataset/
**Email us if the download links are unavailable.**
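The dataset files are served as plain files over HTTPS, so any HTTP client works. Below is a minimal Python sketch using `requests`; the file list is illustrative (only the likes/views file name is taken from the News below), so consult the dataset pages or `Downloader/quick_download.txt` in this repository for the real file lists and download commands.

```python
import requests

BASE = "https://recsys.westlake.edu.cn/MicroLens-50k-Dataset/"

# Illustrative file list -- check the dataset page for the actual names.
FILES = ["MicroLens-50k_likes_and_views.txt"]

for name in FILES:
    with requests.get(BASE + name, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(name, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)
```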
## News
- **2024/05/31**: The "like" and "view" counts for each video have been uploaded; see [MicroLens-50k_likes_and_views.txt](https://recsys.westlake.edu.cn/MicroLens-50k-Dataset/MicroLens-50k_likes_and_views.txt) and [MicroLens-100k_likes_and_views.txt](https://recsys.westlake.edu.cn/MicroLens-100k-Dataset/MicroLens-100k_likes_and_views.txt). A loading sketch appears after this list.
- **2024/04/15**: Our dataset has been added to the MMRec framework, see https://github.com/enoche/MMRec/tree/master/data.
- **2024/04/04**: We have provided extracted multi-modal features (text/images/videos) of MicroLens-100k for multimodal recommendation tasks; see https://recsys.westlake.edu.cn/MicroLens-100k-Dataset/extracted_modality_features/. The preprocessing code has also been uploaded; see [video_feature_extraction_(from_lmdb).py](https://github.com/westlake-repl/MicroLens/blob/master/Data%20Processing/video_feature_extraction_(from_lmdb).py).
- **2024/03/01**: We have updated the command example for automatically downloading all videos, see https://github.com/westlake-repl/MicroLens/blob/master/Downloader/quick_download.txt.
- **2023/10/21**: We also release a subset of our MicroLens with extracted features for multimodal fairness recommendation, which can be downloaded from https://recsys.westlake.edu.cn/MicroLens-Fairness-Dataset/
- **2023/09/28**: We have temporarily released MicroLens-50K (50,000 users) and MicroLens-100K (100,000 users) along with their associated multimodal data, including raw text, images, audio, video, and video comments. You can access them through the links above. To acquire the complete MicroLens dataset, kindly reach out to the corresponding author via email. If you have an innovative idea for building a foundational recommendation model but require a large dataset and computational resources, consider joining our lab as an intern. We can provide access to 100 NVIDIA 80G A100 GPUs and a billion-level dataset of user-video/image/text interactions.
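As mentioned in the 2024/05/31 news item, the likes/views files can be loaded with pandas. The column layout below (item ID, like count, view count, whitespace-separated) is an assumption; verify it against the actual file before use.

```python
import pandas as pd

# Assumed layout: one video per row -- item ID, likes, views.
# Verify the separator and column order against the actual file.
df = pd.read_csv(
    "MicroLens-50k_likes_and_views.txt",
    sep=r"\s+",
    header=None,
    names=["item_id", "likes", "views"],
)
print(df.head())
```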
# Citation
If you use our dataset or code, or find MicroLens useful in your work, please cite our paper as:

```bib
@article{ni2023content,
title={A Content-Driven Micro-Video Recommendation Dataset at Scale},
author={Ni, Yongxin and Cheng, Yu and Liu, Xiangyan and Fu, Junchen and Li, Youhua and He, Xiangnan and Zhang, Yongfeng and Yuan, Fajie},
journal={arXiv preprint arXiv:2309.15379},
year={2023}
}
```

> :warning: **Caution**: It is prohibited to privately modify the dataset and then offer secondary downloads. If you have made alterations to the dataset in your work, you are encouraged to open-source your data processing code so that others can benefit from your methods, or to notify us of your new dataset so that we can list it on this GitHub page with your paper.
# Code
We have released the code for all algorithms, including VideoRec (which implements all 15 video models in this project), IDRec, and VIDRec. For details, see the paths "Code/VideoRec", "Code/IDRec", and "Code/VIDRec"; each folder contains multiple subfolders, one per baseline.
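The layout, in brief (the comments are our gloss, not folder contents):

```
Code/
├── VideoRec/   # implements all 15 video models in this project
├── IDRec/      # ID-based baselines
└── VIDRec/     # ID + video baselines
```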
## Special instructions on VideoRec
In VideoRec, to switch to a different training mode, run the corresponding script: `run_id.py`, `run_text.py`, `run_image.py`, or `run_video.py`. For testing, use `run_id_test.py`, `run_text_test.py`, `run_image_test.py`, and `run_video_test.py`, respectively. See "Code/VideoRec/SASRec" for more details.
Before running a training script, make sure to set the dataset path, item encoder, pretrained model path, GPU devices, GPU count, and hyperparameters. Also remember to specify the best validation checkpoint (e.g., `epoch-30.pt`) before running the corresponding test script.
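A typical train-then-test sequence, assuming the paths and hyperparameters have already been set inside the scripts as described above:

```
cd Code/VideoRec/SASRec
python run_video.py         # train the video-based model
# pick the best validation checkpoint, e.g. epoch-30.pt, then:
python run_video_test.py    # evaluate that checkpoint
```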
Note that you will need to prepare an LMDB file and point the scripts to it before running the image-based or video-based VideoRec. To assist with this, we provide a Python script for LMDB generation; see `Data Generation/generate_cover_frames_lmdb.py` for details.
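For orientation, a minimal sketch of building such an LMDB file, assuming one JPEG cover frame per item stored as raw bytes keyed by item ID; `generate_cover_frames_lmdb.py` is the authoritative version and may use a different key/value scheme.

```python
import lmdb
from pathlib import Path

# Assumed input layout: one cover frame per item, named <item_id>.jpg.
frames_dir = Path("cover_frames")
env = lmdb.open("cover_frames.lmdb", map_size=1 << 40)  # allow up to 1 TB

with env.begin(write=True) as txn:
    for jpg in sorted(frames_dir.glob("*.jpg")):
        txn.put(jpg.stem.encode("utf-8"), jpg.read_bytes())

env.close()
```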
## Special instructions on IDRec and VIDRec
For IDRec, see `IDRec/process_data.ipynb` to process the interaction data, then run `main.py` under each baseline's folder to launch that baseline. The data path and model parameters can be modified via the `yaml` file under each folder.
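For illustration, a configuration in that style; the key names below are placeholders, not the real schema, so check the `yaml` file shipped with each baseline:

```yaml
# Hypothetical example -- the actual keys are defined per baseline.
data_path: ../data/MicroLens-100k/
embedding_size: 64
learning_rate: 0.001
epochs: 100
```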
## Environments
```
python==3.8.12
pytorch==1.8.0
cudatoolkit==11.1
torchvision==0.9.0
transformers==4.23.1
```

# Baseline_Evaluation
# Video_Understanding_Meets_Recommender_Systems
# Ad
#### The laboratory is hiring research assistants, interns, doctoral students, and postdoctoral researchers. Please contact the corresponding author for details.