Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/caojiangxia/BiGI
[WSDM 2021]Bipartite Graph Embedding via Mutual Information Maximization
- Host: GitHub
- URL: https://github.com/caojiangxia/BiGI
- Owner: caojiangxia
- License: MIT
- Created: 2020-10-19T08:16:52.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2021-07-06T09:49:17.000Z (over 3 years ago)
- Last Synced: 2024-08-02T13:21:56.327Z (3 months ago)
- Topics: bipartite-graphs, deep-infomax, graph-embedding, graph-neural-networks, recommender-system, self-supervised-learning
- Language: Python
- Homepage: https://arxiv.org/abs/2012.05442
- Size: 1.89 MB
- Stars: 74
- Watchers: 5
- Forks: 13
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
BiGI
===
This is the source code for the paper "Bipartite Graph Embedding via Mutual Information Maximization", accepted at WSDM 2021, by Jiangxia Cao*, Xixun Lin*, Shu Guo, Luchen Liu, Tingwen Liu, and Bin Wang (* denotes equal contribution).
```
@inproceedings{bigi2021,
title={Bipartite Graph Embedding via Mutual Information Maximization},
author={Cao*, Jiangxia and Lin*, Xixun and Guo, Shu and Liu, Luchen and Liu, Tingwen and Wang, Bin},
booktitle={ACM International Conference on Web Search and Data Mining (WSDM)},
year={2021}
}
```

Requirements
---
Python = 3.6.2
PyTorch = 1.1.0
CUDA = 9.0
scikit-learn = 0.22
SciPy = 1.3.1
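The repository does not ship an install script; the following is a minimal sketch of one way to reproduce this environment with conda (the `bigi` environment name and the `pytorch` channel are our own assumptions, not from the repo):

```shell
# Hypothetical environment setup matching the versions listed above.
conda create -n bigi python=3.6.2 -y
conda activate bigi
# PyTorch 1.1.0 built against CUDA 9.0; exact package/channel names may vary.
conda install pytorch=1.1.0 cudatoolkit=9.0 -c pytorch -y
pip install scikit-learn==0.22 scipy==1.3.1
```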
Preparation
---
Some datasets are already included in the `./dataset` directory. The remaining MovieLens datasets can be downloaded from the [official website](https://grouplens.org/datasets/movielens/).
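For example, ML-100K can be fetched from GroupLens and unpacked under `dataset/movie/` (the download URL is assumed from the GroupLens site; the per-split `1/` subdirectory expected by the ML-100K command below comes from the repository's own preprocessing):

```shell
# Assumed GroupLens download URL for ML-100K.
wget https://files.grouplens.org/datasets/movielens/ml-100k.zip
unzip ml-100k.zip -d dataset/movie/
# The split directory dataset/movie/ml-100k/1/ used via --data_dir below
# is created by the repo's preprocessing; see the repository for its format.
```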
Usage
---
To run this project, make sure the packages listed above are installed. Our experiments were conducted on a PC with an Intel Xeon E5 2.1 GHz CPU and a Tesla V100 GPU. Each command below launches training in the background with `nohup` and redirects all output to a log file.
For running DBLP:
```shell
CUDA_VISIBLE_DEVICES=1 nohup python -u train_rec.py --id dblp --struct_rate 0.00001 --GNN 2 > BiGIdblp.log 2>&1&
```

For running ML-100K:
```shell
CUDA_VISIBLE_DEVICES=1 nohup python -u train_rec.py --data_dir dataset/movie/ml-100k/1/ --batch_size 128 --id ml100k --struct_rate 0.0001 --GNN 2 > BiGI100k.log 2>&1&
```

For running ML-10M:
```shell
CUDA_VISIBLE_DEVICES=1 nohup python -u train_rec.py --batch_size 100000 --data_dir dataset/movie/ml-10m/ml-10M100K/1/ --id ml10m --struct_rate 0.00001 > BiGI10m.log 2>&1&
```

For running Wiki(5:5):
```shell
CUDA_VISIBLE_DEVICES=1 nohup python -u train_lp.py --id wiki5 --struct_rate 0.0001 --GNN 2 > BiGIwiki5.log 2>&1&
```

For running Wiki(4:6):
```shell
CUDA_VISIBLE_DEVICES=1 nohup python -u train_lp.py --data_dir dataset/wiki/4/ --id wiki4 --struct_rate 0.0001 --GNN 2 > BiGIwiki4.log 2>&1&
```
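Because each command detaches with `nohup` and `&`, training progress can be followed from the log file it writes (log names taken from the commands above):

```shell
# Follow the ML-100K training log as it is written.
tail -f BiGI100k.log
# Show background jobs launched from this shell.
jobs -l
```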