Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding
https://github.com/Alibaba-NLP/SeqGPT
Last synced: about 1 month ago
- Host: GitHub
- URL: https://github.com/Alibaba-NLP/SeqGPT
- Owner: Alibaba-NLP
- License: apache-2.0
- Created: 2023-08-21T03:46:42.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-12-13T13:37:21.000Z (about 1 year ago)
- Last Synced: 2024-08-03T09:07:03.264Z (5 months ago)
- Language: Python
- Homepage:
- Size: 715 KB
- Stars: 203
- Watchers: 4
- Forks: 10
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- StarryDivineSky - Alibaba-NLP/SeqGPT
README
## An Out-of-the-box Large Language Model for Open Domain Sequence Understanding
Tianyu Yu*, Chengyue Jiang*, Chao Lou*, Shen Huang*, Xiaobin Wang, Wei Liu, Jiong Cai, Yangning Li, Yinghui Li, Kewei Tu, Hai-Tao Zheng, Ningyu Zhang, Pengjun Xie, Fei Huang, Yong Jiang†
DAMO Academy, Alibaba Group
*Equal Contribution; †Corresponding Author

[![license](https://img.shields.io/github/license/Alibaba-NLP/SeqGPT)](./LICENSE)
[![paper](https://img.shields.io/badge/arXiv-2308.10529-red)](https://arxiv.org/abs/2308.10529)

## Spotlights
* A bilingual model (English and Chinese) specially enhanced for open-domain NLU.
* Trained on diverse synthesized data and high-quality NLU datasets.
* Handles any NLU task that can be transformed into a combination of the two atomic tasks, classification and extraction (see the prompt sketch after this list).
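As an illustration of this decomposition, below is a minimal sketch of the two atomic-task prompts in the template used by the Inference example later in this README; the example sentence and label sets are hypothetical.

```python
# Illustrative only: the two atomic-task prompts SeqGPT operates on, in the
# same template as the Inference example below. The example sentence and
# label sets here are hypothetical; labels are joined by a full-width comma.
GEN_TOK = '[GEN]'
sent = 'Alibaba released SeqGPT in August 2023.'

# Atomic task 1: classification (分类) over a user-supplied label set.
cls_prompt = '输入: {}\n分类: {}\n输出: {}'.format(sent, 'technology，sports，finance', GEN_TOK)

# Atomic task 2: extraction (抽取) of spans for user-supplied types.
ext_prompt = '输入: {}\n抽取: {}\n输出: {}'.format(sent, 'organization，time', GEN_TOK)
```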
## Update News

`SeqGPT` is continuously updated. We have provided online demos for everyone, and we will release new model versions with upgraded capabilities in the future. Stay tuned!
- **[2023/10/09]** We provide an [API](https://help.aliyun.com/zh/dashscope/developer-reference/opennlu-api-details) for SeqGPT-3B for users who want access to a **larger** SeqGPT.
- **[2023/09/20]** We provide a sample script for fine-tuning on a custom dataset [here](https://github.com/modelscope/swift/blob/main/examples/pytorch/llm/scripts/seqgpt_560m/full/sft.sh).
- **[2023/08/23]** We release the weights of SeqGPT-560M on both [Modelscope](https://www.modelscope.cn/models/damo/nlp_seqgpt-560m) and [Hugging Face](https://huggingface.co/DAMO-NLP/SeqGPT-560M). You can download the model and run inference by following the [usage example](#inference).
- **[2023/08/23]** We provide an online demo of SeqGPT at [Modelscope](https://www.modelscope.cn/studios/TTCoding/open_ner/summary)! Try it now!
- **[2023/08/21]** We release the SeqGPT paper: [SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding](https://arxiv.org/abs/2308.10529). More implementation details and experimental results are presented there.

## Performance
We perform a human evaluation of SeqGPT-7B1 and ChatGPT on the held-out datasets. Ten annotators are asked to decide, for each example, which model gives the better answer or whether the two models are tied. SeqGPT-7B1 outperforms ChatGPT on 7 of 10 NLU tasks but lags behind on sentiment analysis (SA), slot filling (SF), and natural language inference (NLI).
## Usage
### Install
```sh
conda create -n seqgpt python==3.8.16
conda activate seqgpt
pip install -r requirements.txt
```

### Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import torch

model_name_or_path = 'DAMO-NLP/SeqGPT-560M'
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
tokenizer.padding_side = 'left'
tokenizer.truncation_side = 'left'

if torch.cuda.is_available():
    model = model.half().cuda()
model.eval()
GEN_TOK = '[GEN]'

while True:
    sent = input('输入/Input: ').strip()
    task = input('分类/classify press 1, 抽取/extract press 2: ').strip()
    labels = input('标签集/Label-Set (e.g. labelA,LabelB,LabelC): ').strip().replace(',', '，')
    task = '分类' if task == '1' else '抽取'

    # Changing the instruction can harm the performance
    p = '输入: {}\n{}: {}\n输出: {}'.format(sent, task, labels, GEN_TOK)
    input_ids = tokenizer(p, return_tensors="pt", padding=True, truncation=True, max_length=1024)
    input_ids = input_ids.to(model.device)
    outputs = model.generate(**input_ids, num_beams=4, do_sample=False, max_new_tokens=256)
    input_ids = input_ids.get('input_ids', input_ids)
    # Decode only the tokens generated after the prompt
    outputs = outputs[0][len(input_ids[0]):]
    response = tokenizer.decode(outputs, skip_special_tokens=True)
    print('BOT: ========== \n{}'.format(response))
```
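For non-interactive use, the same prompt format can be wrapped in a small helper. The sketch below is ours, not part of the released code: it assumes `model`, `tokenizer`, and `GEN_TOK` have been set up as in the example above, and the helper name and sample inputs are illustrative.

```python
# Minimal non-interactive wrapper around the prompt format above (assumes
# `model`, `tokenizer`, and `GEN_TOK` are set up as in the example; the
# helper name and the sample inputs are illustrative, not from the repo).
def seqgpt_predict(sent, task, labels, max_new_tokens=256):
    """task is '分类' (classification) or '抽取' (extraction); labels is a list of strings."""
    p = '输入: {}\n{}: {}\n输出: {}'.format(sent, task, '，'.join(labels), GEN_TOK)
    inputs = tokenizer(p, return_tensors='pt', padding=True, truncation=True, max_length=1024)
    inputs = inputs.to(model.device)
    outputs = model.generate(**inputs, num_beams=4, do_sample=False, max_new_tokens=max_new_tokens)
    # Decode only the generated continuation, not the prompt tokens.
    generated = outputs[0][len(inputs['input_ids'][0]):]
    return tokenizer.decode(generated, skip_special_tokens=True)

# Example usage: topic classification and entity extraction.
print(seqgpt_predict('I love playing football on weekends.', '分类', ['sports', 'music', 'finance']))
print(seqgpt_predict('Alibaba released SeqGPT in 2023.', '抽取', ['organization', 'time']))
```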
## Citation

If you find this work useful, consider giving this repository a star and citing our paper as follows:
```
@misc{yu2023seqgpt,
title={SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding},
author={Tianyu Yu and Chengyue Jiang and Chao Lou and Shen Huang and Xiaobin Wang and Wei Liu and Jiong Cai and Yangning Li and Yinghui Li and Kewei Tu and Hai-Tao Zheng and Ningyu Zhang and Pengjun Xie and Fei Huang and Yong Jiang},
year={2023},
eprint={2308.10529},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```