Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/HKUDS/LLMRec
[WSDM'2024 Oral] "LLMRec: Large Language Models with Graph Augmentation for Recommendation"
colloborative-filtering content-based-recommendation data-augmentation-strategies graph-augmentation graph-learning multi-modal-recommendation recommendation-system recommendation-with-side-information
Last synced: 28 days ago
- Host: GitHub
- URL: https://github.com/HKUDS/LLMRec
- Owner: HKUDS
- License: apache-2.0
- Created: 2023-10-23T01:56:37.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-06-10T06:39:28.000Z (6 months ago)
- Last Synced: 2024-08-04T03:02:13.960Z (4 months ago)
- Topics: colloborative-filtering, content-based-recommendation, data-augmentation-strategies, graph-augmentation, graph-learning, multi-modal-recommendation, recommendation-system, recommendation-with-side-information
- Language: Python
- Homepage: https://arxiv.org/abs/2311.00423
- Size: 8.34 MB
- Stars: 316
- Watchers: 4
- Forks: 37
- Open Issues: 8
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
Awesome Lists containing this project
- awesome-recommend-system-pretraining-papers
- Awesome-LLM-for-RecSys
- StarryDivineSky - HKUDS/LLMRec - Augments the interaction graph by i) reinforcing u-i interaction edges, ii) enhancing item node attributes, and iii) performing user node profiling from a natural language perspective. (Recommendation-system algorithm libraries & lists / Web services_Other)
README
# LLMRec: Large Language Models with Graph Augmentation for Recommendation
PyTorch implementation for WSDM 2024 paper [LLMRec: Large Language Models with Graph Augmentation for Recommendation](https://arxiv.org/pdf/2311.00423.pdf).
[Wei Wei](#), [Xubin Ren](https://rxubin.com/), [Jiabin Tang](https://tjb-tech.github.io/), [Qinyong Wang](#), [Lixin Su](#), [Suqi Cheng](#), [Junfeng Wang](#), [Dawei Yin](https://www.yindawei.com/) and [Chao Huang](https://sites.google.com/view/chaoh/home)*.
(*Correspondence)

**[Data Intelligence Lab](https://sites.google.com/view/chaoh/home)@[University of Hong Kong](https://www.hku.hk/)**, Baidu Inc.
[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://www.youtube.com/channel/UC1wKlPPlP9zKGYk62yR0K_g)

This repository hosts the code, original data, and augmented data of **LLMRec**.
-----------
LLMRec is a novel framework that enhances recommenders by applying three simple yet effective LLM-based graph augmentation strategies to the recommendation system. LLMRec makes the most of the content within online platforms (e.g., Netflix, MovieLens) to augment the interaction graph by i) reinforcing user-item interaction edges, ii) enhancing item node attributes, and iii) conducting user node profiling, intuitively from a natural language perspective.
-----------
## 🎉 News 📢📢
- [x] [2024.3.20] 🚀🚀 📢📢📢📢🌹🔥🔥🚀🚀 Because baselines `LATTICE` and `MMSSL` require some minor modifications, we provide code that can be easily run by simply modifying the dataset path.
- [x] [2023.11.3] 🚀🚀 Release the script for constructing the prompt.
- [x] [2023.11.1] 🔥🔥 Release the multi-modal datasets (Netflix, MovieLens), including textual data and visual data.
- [x] [2023.11.1] 🚀🚀 Release the LLM-augmented textual data (by gpt-3.5-turbo-0613) and LLM-augmented embeddings (by text-embedding-ada-002).
- [x] [2023.10.28] 🔥🔥 The full paper of our LLMRec is available at [LLMRec: Large Language Models with Graph Augmentation for Recommendation](https://arxiv.org/pdf/2311.00423.pdf).
- [x] [2023.10.28] 🚀🚀 Release the code of LLMRec.
## 👉 TODO
- [ ] Provide larger versions of the datasets.
- [ ] ...

-----------
## Dependencies
```
pip install -r requirements.txt
```

## Usage
### Stage 1: LLM-based Data Augmentation
```
cd LLMRec/LLM_augmentation/
python ./gpt_ui_aug.py
python ./gpt_user_profiling.py
python ./gpt_i_attribute_generate_aug.py
```
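These scripts use gpt-3.5-turbo and text-embedding-ada-002 (see the news above) to generate the augmented edges, user profiles, and item attributes. The sketch below only illustrates what such a call might look like with the `openai` Python client; it is not the repository's actual implementation, and the helper name, model string, and history/candidate formatting are assumptions, with the prompt mirroring the implicit-feedback example shown later in this README.

```python
# Hypothetical sketch of an LLM-based implicit-feedback augmentation call;
# the real logic lives in LLM_augmentation/gpt_ui_aug.py.
from openai import OpenAI  # assumes the openai>=1.0 client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def augment_user_edges(history: str, candidates: str) -> str:
    """Ask the LLM to pick a liked and a disliked item from the candidate list."""
    prompt = (
        "Recommend user with movies based on user history that each movie "
        f"with title, year, genre. History: {history} Candidate: {candidates} "
        "Output index of user's favorite and dislike movie from candidate. "
        "Please just give the index in []."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the paper used gpt-3.5-turbo-0613
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic choice of indices
    )
    return response.choices[0].message.content


# e.g. returns something like "248 121": one positive and one negative item index
print(augment_user_edges("[332] Heart and Souls (1993), Comedy|Fantasy",
                         "[121] The Vampire Lovers (1970), Horror [248] The Invisible Guest 2016, Crime"))
```
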
### Stage 2: Recommender training with LLM-augmented Data
```
cd LLMRec/
python ./main.py --dataset {DATASET}
```
Supported datasets: `netflix`, `movielens`

Specific code execution example on `netflix`:
```
# LLMRec
python ./main.py

# w/o-u-i
python ./main.py --aug_sample_rate=0.0

# w/o-u
python ./main.py --user_cat_rate=0

# w/o-u&i
python ./main.py --user_cat_rate=0 --item_cat_rate=0

# w/o-prune
python ./main.py --prune_loss_drop_rate=0
```
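The flags above switch off individual augmentation components: `--aug_sample_rate` controls the LLM-augmented u-i edges, `--user_cat_rate` / `--item_cat_rate` the augmented user/item attributes, and `--prune_loss_drop_rate` the noise pruning. A minimal sketch of how such switches could be declared with `argparse`; the default values and help strings are illustrative assumptions, not the defaults hard-coded in `main.py`:

```python
# Hypothetical flag definitions for the ablation switches used above;
# defaults are placeholders, not the values actually used in main.py.
import argparse

parser = argparse.ArgumentParser(description="LLMRec training")
parser.add_argument("--dataset", default="netflix", choices=["netflix", "movielens"])
parser.add_argument("--aug_sample_rate", type=float, default=0.1,
                    help="fraction of LLM-augmented u-i edges used in training (0.0 gives w/o-u-i)")
parser.add_argument("--user_cat_rate", type=float, default=1.0,
                    help="weight of LLM-augmented user attributes (0 gives w/o-u)")
parser.add_argument("--item_cat_rate", type=float, default=1.0,
                    help="weight of LLM-augmented item attributes (0, together with --user_cat_rate=0, gives w/o-u&i)")
parser.add_argument("--prune_loss_drop_rate", type=float, default=0.8,
                    help="drop rate of the noise-pruning loss (0 gives w/o-prune)")
args = parser.parse_args()
```
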
-----------
## Datasets
```
├─ LLMRec/
├── data/
├── netflix/
...
```

### Multi-modal Datasets
🌹🌹 Please cite our paper if you use the `netflix` dataset~ ❤️

We collected a multi-modal dataset using the original [Netflix Prize Data](https://www.kaggle.com/datasets/netflix-inc/netflix-prize-data) released on the [Kaggle](https://www.kaggle.com/) website. The data format is directly compatible with state-of-the-art multi-modal recommendation models such as [LLMRec](https://github.com/HKUDS/LLMRec), [MMSSL](https://github.com/HKUDS/MMSSL), [LATTICE](https://github.com/CRIPAC-DIG/LATTICE), [MICRO](https://github.com/CRIPAC-DIG/MICRO), and others, without requiring any additional data preprocessing.
`Textual Modality:` We have released the item information curated from the original dataset in the "item_attribute.csv" file. Additionally, we have incorporated textual information enhanced by LLM into the "augmented_item_attribute_agg.csv" file. (The following three images represent (1) information about Netflix as described on the Kaggle website, (2) textual information from the original Netflix Prize Data, and (3) textual information augmented by LLMs.)
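To quickly inspect these textual files, a small sketch with pandas may help; the paths are assumed to follow the directory layout shown above, and the column layout is whatever the downloaded CSVs contain (it is not reproduced in this README):

```python
# Peek at the released textual side information.
# Paths are assumptions based on the directory layout above.
import pandas as pd

# original item attributes curated from the Netflix Prize Data
items = pd.read_csv("./data/netflix/item_attribute.csv")
print(items.shape)
print(items.head())

# the same items with the additional LLM-augmented attributes aggregated in
augmented = pd.read_csv("./data/netflix/augmented_item_attribute_agg.csv")
print(augmented.head())
```
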
`Visual Modality:` We have released the visual information obtained from web crawling in the "Netflix_Posters" folder. (The following image displays the poster acquired by web crawling using item information from the Netflix Prize Data.)
### Original Multi-modal Datasets & Augmented Datasets
Download the Netflix dataset.
🚀🚀
We provide the processed data (i.e., CF training data & basic user-item interactions, original multi-modal data including images and text of items, encoded visual/textual features, and LLM-augmented text/embeddings). 🌹 We hope to contribute to our community and facilitate your research 🚀🚀 ~

- `netflix`: [Google Drive Netflix](https://drive.google.com/drive/folders/1BGKm3nO4xzhyi_mpKJWcfxgi3sQ2j_Ec?usp=drive_link). [🌟(Image&Text)](https://drive.google.com/file/d/1euAnMYD1JBPflx0M86O2M9OsbBSfrzPK/view?usp=drive_link)
### Encoding the Multi-modal Content
We use [CLIP-ViT](https://huggingface.co/openai/clip-vit-base-patch32) and [Sentence-BERT](https://www.sbert.net/) separately as encoders for visual side information and textual side information.
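A minimal sketch of this encoding step, assuming Hugging Face `transformers` and `sentence-transformers` are installed; the poster path and the specific Sentence-BERT checkpoint are placeholders, and only the CLIP-ViT checkpoint is the one linked above:

```python
# Encode item posters with CLIP-ViT and item text with Sentence-BERT.
# Paths and the SBERT checkpoint are illustrative; adapt them to the released data.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from sentence_transformers import SentenceTransformer

# --- visual side information ---
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("./data/netflix/Netflix_Posters/332.jpg")  # hypothetical poster file
inputs = clip_processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_feat = clip_model.get_image_features(**inputs)  # shape: (1, 512)

# --- textual side information ---
sbert = SentenceTransformer("all-MiniLM-L6-v2")  # an example SBERT checkpoint
text_feat = sbert.encode(["Heart and Souls (1993), Comedy|Fantasy"])  # shape: (1, 384)

print(image_feat.shape, text_feat.shape)
```
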
-----------
## Prompt & Completion Example
### LLM-based Implicit Feedback Augmentation
> Prompt
>> Recommend user with movies based on user history that each movie with title, year, genre. History: [332] Heart and Souls (1993), Comedy|Fantasy [364] Men with Brooms (2002), Comedy|Drama|Romance Candidate: [121] The Vampire Lovers (1970), Horror [155] Billabong Odyssey (2003), Documentary [248] The Invisible Guest 2016, Crime, Drama, Mystery Output index of user's favorite and dislike movie from candidate. Please just give the index in [].

> Completion
>> 248 121

### LLM-based User Profile Augmentation
> Prompt
>> Generate user profile based on the history of user, that each movie with title, year, genre. History: [332] Heart and Souls (1993), Comedy|Fantasy [364] Men with Brooms (2002), Comedy|Drama|Romance Please output the following infomation of user, output format: {age: , gender: , liked genre: , disliked genre: , liked directors: , country: , language: }

> Completion
>> {age: 50, gender: female, liked genre: Comedy|Fantasy, Comedy|Drama|Romance, disliked genre: Thriller, Horror, liked directors: Ron Underwood, country: Canada, United States, language: English}

### LLM-based Item Attributes Augmentation
> Prompt
>> Provide the inquired information of the given movie. [332] Heart and Souls (1993), Comedy|Fantasy The inquired information is: director, country, language. And please output them in form of: director, country, language

> Completion
>> Ron Underwood, USA, English

## Augmented Data
##### Augmented Implicit Feedback (Edge)
For each user, 0 represents a positive sample, and 1 represents a negative sample.
##### Augmented User Profile (User Node)
For each user, the dictionary stores augmented information such as 'age,' 'gender,' 'liked genre,' 'disliked genre,' 'liked directors,' 'country,' and 'language.'
##### Augmented Item Attribute (Item Node)
For each item, the dictionary stores augmented information such as 'director,' 'country,' and 'language.'
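To peek at one of these dictionaries after downloading the data, a short sketch; the pickle file name below is a placeholder, not necessarily the released file name:

```python
# Inspect one of the released augmented-data dictionaries.
# 'augmented_item_attribute_dict' is a placeholder name; substitute the actual file.
import pickle

with open("./data/netflix/augmented_item_attribute_dict", "rb") as f:
    item_attrs = pickle.load(f)

# each entry maps an item id to its LLM-generated director / country / language
some_item = next(iter(item_attrs))
print(some_item, item_attrs[some_item])
```
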
### Candidate Preparation for LLM-based Implicit Feedback Augmentation
- Step 1: Select a base model such as MMSSL or LATTICE.
- Step 2: Obtain user embeddings and item embeddings.
- Step 3: Generate the candidates.
```
# score every item for every user and keep the 10 highest-scoring item indices per user
_, candidate_indices = torch.topk(torch.mm(G_ua_embeddings, G_ia_embeddings.T), k=10)
# cache the candidate set so the LLM-based augmentation stage can read it back
pickle.dump(candidate_indices.cpu(), open('./data/' + args.datasets + '/candidate_indices', 'wb'))
```
Example of the generated candidate data:
```
In [3]: candidate_indices
Out[3]:
tensor([[ 9765, 2930, 6646, ..., 11513, 12747, 13503],
[ 3665, 8999, 2587, ..., 1559, 2975, 3759],
[ 2266, 8999, 1559, ..., 8639, 465, 8287],
...,
[11905, 10195, 8063, ..., 12945, 12568, 10428],
[ 9063, 6736, 6938, ..., 5526, 12747, 11110],
[ 9584, 4163, 4154, ..., 2266, 543, 7610]])

In [4]: candidate_indices.shape
Out[4]: torch.Size([13187, 10])
```

-----------
## Citing
If you find this work helpful to your research, please kindly consider citing our paper.
```
@article{wei2023llmrec,
title={LLMRec: Large Language Models with Graph Augmentation for Recommendation},
author={Wei, Wei and Ren, Xubin and Tang, Jiabin and Wang, Qinyong and Su, Lixin and Cheng, Suqi and Wang, Junfeng and Yin, Dawei and Huang, Chao},
journal={arXiv preprint arXiv:2311.00423},
year={2023}
}
```

## Acknowledgement
The structure of this code is largely based on [MMSSL](https://github.com/HKUDS/MMSSL), [LATTICE](https://github.com/CRIPAC-DIG/LATTICE), and [MICRO](https://github.com/CRIPAC-DIG/MICRO). We thank them for their work.