Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Slot-Gated Modeling for Joint Slot Filling and Intent Prediction
https://github.com/MiuLab/SlotGated-SLU
- Host: GitHub
- URL: https://github.com/MiuLab/SlotGated-SLU
- Owner: MiuLab
- Created: 2018-03-13T05:14:58.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2021-04-06T08:05:30.000Z (over 3 years ago)
- Last Synced: 2024-08-02T16:55:49.054Z (3 months ago)
- Topics: intent-prediction, joint-models, natural-language-understanding, slot-filling, spoken-language-understanding
- Language: Python
- Homepage:
- Size: 418 KB
- Stars: 303
- Watchers: 9
- Forks: 108
- Open Issues: 8
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- StarryDivineSky - MiuLab/SlotGated-SLU - Uses a slot-gated mechanism to address the lack of an explicit link between slot filling and intent prediction, achieving good results. (Entity recognition NER, intent detection, slot filling / other: text generation, text dialogue)
README
# Slot-Gated Modeling for Joint Slot Filling and Intent Prediction
## Reference
Main paper to be cited ([Goo et al., 2018](https://www.csie.ntu.edu.tw/~yvchen/doc/NAACL18_SlotGated.pdf)):

```
@inproceedings{goo2018slot,
title={Slot-Gated Modeling for Joint Slot Filling and Intent Prediction},
author={Chih-Wen Goo and Guang Gao and Yun-Kai Hsu and Chih-Li Huo and Tsung-Chieh Chen and Keng-Wei Hsu and Yun-Nung Chen},
booktitle={Proceedings of The 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2018}
}
```

## Want to Reproduce the Experiment?
Pass `--dataset=atis` or `--dataset=snips` to use the ATIS or Snips ([Coucke et al., 2018](https://arxiv.org/abs/1805.10190)) dataset.

## Where to Put My Dataset?
Put your dataset under ./data/ and pass `--dataset=foldername`.
For example, if your dataset is in ./data/mydata, pass `--dataset=mydata`.
Your dataset should be split into three folders - train, test, and valid - which are named 'train', 'test', and 'valid' by the default settings of train.py.
Each of these folders contains three files - word sequences, slot labels, and intent labels - which are named 'seq.in', 'seq.out', and 'label' by the default settings of train.py.
For example, the full path to the training slot-label file is './data/mydata/train/seq.out'.
Each line represents one example, and slot labels should use the IOB format.
Vocabulary files will be generated automatically by utils.createVocabulary().
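To make the layout concrete, here is a minimal sketch that writes a tiny toy dataset in the expected structure. The folder name `mydata`, the utterances, and the slot/intent label names are made up for illustration; only the split folder names and the file names ('seq.in', 'seq.out', 'label') follow the train.py defaults described above.

```
import os

# Toy examples: (word sequence, IOB slot labels, intent label).
# The utterances and label names below are illustrative only.
examples = [
    ("show flights from boston to denver",
     "O O O B-fromloc O B-toloc",
     "find_flight"),
    ("play some jazz music",
     "O O B-genre O",
     "play_music"),
]

for split in ("train", "test", "valid"):  # folder names expected by train.py
    folder = os.path.join("data", "mydata", split)
    os.makedirs(folder, exist_ok=True)
    with open(os.path.join(folder, "seq.in"), "w") as f_in, \
         open(os.path.join(folder, "seq.out"), "w") as f_out, \
         open(os.path.join(folder, "label"), "w") as f_label:
        for words, slots, intent in examples:
            f_in.write(words + "\n")      # one word sequence per line
            f_out.write(slots + "\n")     # one IOB slot-label sequence per line
            f_label.write(intent + "\n")  # one intent label per line

# Afterwards: python3 train.py --dataset=mydata
```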
You may see ./data/atis for more detail.

## Requirements
tensorflow 1.4
python 3.5

## Usage
Some sample usage:
* Run with 32 units, the ATIS dataset, and no patience for early stopping:
  `python3 train.py --num_units=32 --dataset=atis --patience=0`
* Disable early stopping, use the Snips dataset, and use the intent-attention version:
  `python3 train.py --no_early_stop --dataset=snips --model_type=intent_only`
* Use `python3 train.py -h` for all available parameter settings.
* Note: `--dataset` must be specified. If you do not want to use this flag, pass `--dataset=''` instead.
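Before pointing `--dataset` at a custom folder, it can help to verify that the three files in each split are line-aligned and that every word has exactly one slot label. The following sketch is a small optional check, not part of this repository; the dataset name `mydata` is hypothetical, and the expected layout is the one described in the section above.

```
import os

dataset = "mydata"  # hypothetical folder name under ./data/

for split in ("train", "test", "valid"):
    folder = os.path.join("data", dataset, split)
    with open(os.path.join(folder, "seq.in")) as f_in, \
         open(os.path.join(folder, "seq.out")) as f_out, \
         open(os.path.join(folder, "label")) as f_label:
        seqs, slots, intents = f_in.readlines(), f_out.readlines(), f_label.readlines()

    # The three files must be line-aligned: one example per line in each.
    assert len(seqs) == len(slots) == len(intents), "line counts differ in " + folder

    for i, (words, labels) in enumerate(zip(seqs, slots)):
        # Each word needs exactly one IOB slot label.
        assert len(words.split()) == len(labels.split()), \
            "token/label mismatch in {} on line {}".format(folder, i + 1)

print("dataset looks consistent")
```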