Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/shijx12/TransferNet
Pytorch implementation of EMNLP 2021 paper "TransferNet: An Effective and Transparent Framework for Multi-hop Question Answering over Relation Graph"
- Host: GitHub
- URL: https://github.com/shijx12/TransferNet
- Owner: shijx12
- Created: 2021-04-15T07:26:18.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2023-06-12T07:21:56.000Z (over 1 year ago)
- Last Synced: 2024-08-03T09:07:06.652Z (5 months ago)
- Language: Python
- Homepage:
- Size: 351 KB
- Stars: 62
- Watchers: 2
- Forks: 18
- Open Issues: 15
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- StarryDivineSky - shijx12/TransferNet - TransferNet: an effective and transparent framework for multi-hop question answering over relation graphs. At each hop it predicts scores for the current relations and updates the entity scores, up to a maximum number of hops; it also predicts how many hops the question needs and weights each hop's scores by the hop-count probabilities to obtain the final entity scores. (Knowledge graph QA / KBQA, multi-hop reasoning / other: text generation and dialogue)
README
# TransferNet
PyTorch implementation of the EMNLP 2021 paper
**[TransferNet: An Effective and Transparent Framework for Multi-hop Question Answering over Relation Graph](https://arxiv.org/abs/2104.07302)**
[Jiaxin Shi](https://shijx12.github.io), Shulin Cao, Lei Hou, [Juanzi Li](http://keg.cs.tsinghua.edu.cn/persons/ljz/), [Hanwang Zhang](http://www.ntu.edu.sg/home/hanwangzhang/#aboutme)

We perform transparent multi-hop reasoning over relation graphs of label form (i.e., knowledge graph) and text form. (The repository README shows an example figure of the reasoning process; the image is not reproduced here.)
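In short, the model maintains a score vector over entities: at each hop it predicts a score for every relation from the question, transfers entity scores along the scored edges, and finally weights each hop's entity scores by a predicted hop-count probability. Below is a minimal PyTorch sketch of this scoring scheme on a toy graph; it illustrates the idea only, and all names, shapes, and the score truncation are assumptions rather than the repository's actual code.

```python
import torch

def transfernet_score(adj, rel_probs, hop_probs, topic):
    """Toy TransferNet-style scoring (illustrative assumptions only).

    adj:       (R, N, N) 0/1 adjacency matrix per relation
    rel_probs: (T, R) question-conditioned relation scores at each hop
    hop_probs: (T,)  probability that the question needs t+1 hops
    topic:     (N,)  one-hot vector marking the topic entity
    """
    a = topic
    final = torch.zeros_like(topic)
    for t in range(rel_probs.size(0)):
        # Weighted adjacency for this hop: sum_r p(r | q, t) * adj_r
        M = (rel_probs[t].view(-1, 1, 1) * adj).sum(dim=0)
        # Transfer entity scores along the scored edges, truncated to [0, 1]
        a = torch.clamp(a @ M, max=1.0)
        # Weight hop t's entity scores by its hop-count probability
        final = final + hop_probs[t] * a
    return final

# Toy graph with 3 entities and 2 relations: 0 -r0-> 1 -r1-> 2
adj = torch.zeros(2, 3, 3)
adj[0, 0, 1] = adj[1, 1, 2] = 1.0
scores = transfernet_score(
    adj,
    rel_probs=torch.tensor([[1.0, 0.0], [0.0, 1.0]]),  # hop 1: r0, hop 2: r1
    hop_probs=torch.tensor([0.0, 1.0]),                # a two-hop question
    topic=torch.tensor([1.0, 0.0, 0.0]),
)
print(scores)  # entity 2 ends up with the highest score
```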
If you find this code useful in your research, please cite
``` tex
@inproceedings{shi2021transfernet,
title={TransferNet: An Effective and Transparent Framework for Multi-hop Question Answering over Relation Graph},
author={Jiaxin Shi and Shulin Cao and Lei Hou and Juanzi Li and Hanwang Zhang},
booktitle={EMNLP},
year={2021}
}
```
## Dependencies
- pytorch>=1.2.0
- [transformers](https://github.com/huggingface/transformers)
- tqdm
- nltk
- shutil
## Prepare Datasets
- [MetaQA](https://goo.gl/f3AmcY), we only use its vanilla version.
- [MovieQA](http://www.thespermwhale.com/jaseweston/babi/movieqa.tar.gz), whose `knowledge_source/wiki.txt` serves as the text corpus for our MetaQA-Text experiments. Copy the file into the MetaQA folder, next to `kb.txt`. The MetaQA files should be organized like this:
```shell
MetaQA
+-- kb
| +-- kb.txt
| +-- wiki.txt
+-- 1-hop
| +-- vanilla
| | +-- qa_train.txt
| | +-- qa_dev.txt
| | +-- qa_test.txt
+-- 2-hop
+-- 3-hop
```
- [WebQSP](https://drive.google.com/drive/folders/1RlqGBMo45lTmWz9MUPTq-0KcjSd3ujxc?usp=sharing), which has been processed by [EmbedKGQA](https://github.com/malllabiisc/EmbedKGQA).
- [ComplexWebQuestions](https://drive.google.com/file/d/1ua7h88kJ6dECih6uumLeOIV9a3QNdP-g/view?usp=sharing), which has been processed by [NSM](https://github.com/RichardHGL/WSDM2021_NSM).
- [GloVe 300d pretrained vectors](http://nlp.stanford.edu/data/glove.840B.300d.zip), which are used in the BiGRU model. After unzipping, convert the txt file to a pickle file (a minimal sketch of this conversion follows the list) by running
``` shell
python pickle_glove.py --txt <path/to/glove.840B.300d.txt> --pt <path/to/output/pickle>
```
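For reference, this conversion presumably amounts to parsing the GloVe txt file into a word-to-vector dictionary and pickling it. A minimal sketch under that assumption (illustrative only, not the repository's actual `pickle_glove.py`):

```python
import argparse
import pickle

# Illustrative stand-in for pickle_glove.py: read GloVe vectors from txt
# and dump them as a {word: [300 floats]} dictionary.
parser = argparse.ArgumentParser()
parser.add_argument('--txt', required=True, help='path to glove.840B.300d.txt')
parser.add_argument('--pt', required=True, help='path of the output pickle file')
args = parser.parse_args()

glove = {}
with open(args.txt, encoding='utf-8') as f:
    for line in f:
        parts = line.rstrip('\n').split(' ')
        # 840B tokens may themselves contain spaces, so take the vector from
        # the last 300 fields and join the rest back into the word.
        word = ' '.join(parts[:-300])
        glove[word] = [float(x) for x in parts[-300:]]

with open(args.pt, 'wb') as f:
    pickle.dump(glove, f)
```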
## Experiments
### MetaQA-KB
1. Preprocess
```shell
python -m MetaQA-KB.preprocess --input_dir <path/to/MetaQA> --output_dir <path/to/processed/files>
```
2. Train
```shell
python -m MetaQA-KB.train --glove_pt <path/to/glove/pickle> --input_dir <path/to/processed/files> --save_dir <path/to/checkpoints>
```
3. Predict on the test set
```shell
python -m MetaQA-KB.predict --input_dir <path/to/processed/files> --ckpt <path/to/checkpoint> --mode test
```
4. Visualize the reasoning process. The script shows the information of each sample and then enters an IPython environment, where you can print any variables you are interested in. To stop, quit IPython with `Ctrl+D` and then immediately kill the loop with `Ctrl+C` (see the sketch below).
```shell
python -m MetaQA-KB.predict --input_dir <path/to/processed/files> --ckpt <path/to/checkpoint> --mode vis
```
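To make the `Ctrl+D` / `Ctrl+C` behaviour concrete: vis mode presumably wraps an interactive shell in its per-sample loop, roughly like the toy sketch below (an assumption about the control flow, not the repository's code). Quitting IPython only advances to the next sample; the immediate `Ctrl+C` is what actually ends the loop.

```python
from IPython import embed  # requires the ipython package

# Toy stand-in for a per-sample visualization loop
samples = [{'question': 'who directed Inception?', 'answer': 'Christopher Nolan'}]
for sample in samples:
    print(sample['question'], '->', sample['answer'])
    embed()  # inspect local variables; Ctrl+D exits only this shell
    # A Ctrl+C pressed right after leaving IPython raises KeyboardInterrupt
    # here and terminates the whole loop.
```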
### MetaQA-Text
1. Preprocess
```shell
python -m MetaQA-Text.preprocess --input_dir <path/to/MetaQA> --output_dir <path/to/processed/files>
```
2. Train
```shell
python -m MetaQA-Text.train --glove_pt <path/to/glove/pickle> --input_dir <path/to/processed/files> --save_dir <path/to/checkpoints>
```
The scripts for inference and visualization are the same as for **MetaQA-KB**; just change the Python module to `MetaQA-Text.predict`, e.g. `python -m MetaQA-Text.predict --input_dir <path/to/processed/files> --ckpt <path/to/checkpoint> --mode vis`.
### MetaQA-Text + 50% KB
1. Preprocess
```shell
python -m MetaQA-Text.preprocess --input_dir <path/to/MetaQA> --output_dir <path/to/processed/files> --kb_ratio 0.5
```
2. Train. This setting needs more active paths than plain MetaQA-Text:
```shell
python -m MetaQA-Text.train --input_dir <path/to/processed/files> --save_dir <path/to/checkpoints> --max_active 800 --batch_size 32
```
The scripts for inference and visualization are the same as for **MetaQA-Text**.
### WebQSP
WebQSP does not need preprocessing; we can directly start training:
```shell
python -m WebQSP.train --input_dir <path/to/WebQSP> --save_dir <path/to/checkpoints>
```
### ComplexWebQuestions
Similar to WebQSP, CWQ does not need preprocessing; we can directly start training:
```shell
python -m CompWebQ.train --input_dir <path/to/ComplexWebQuestions> --save_dir <path/to/checkpoints>
```