Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/cm-bf/csa
Last synced: about 1 month ago
- Host: GitHub
- URL: https://github.com/cm-bf/csa
- Owner: CM-BF
- Created: 2020-03-22T09:23:30.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2020-06-18T02:37:14.000Z (over 4 years ago)
- Last Synced: 2023-07-14T17:26:31.868Z (over 1 year ago)
- Language: Python
- Size: 11 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# CSA
## Requirements

* pytorch 1.2
* torchtext
* [BERT model](https://github.com/ymcui/Chinese-BERT-wwm)
* other NLP packages (please refer to error messages)
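To set up these dependencies, here is a minimal install sketch, assuming pip; the torchtext version below is a guess at the release paired with pytorch 1.2 and is not pinned by this repository:

```bash
# Assumed version pairing: torch 1.2.0 with torchtext 0.4.0
pip install torch==1.2.0 torchtext==0.4.0
# Install any further NLP packages reported missing at runtime in the same way.
```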
## Word Vectors Loading

Click [Here](https://ai.tencent.com/ailab/nlp/zh/embedding.html) to download word vectors.
To keep the training script simple, the path to the word-vectors file is hard-coded in `run.py` at line 84.
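As a sketch of what that setup might look like (the archive name, target directory, and file layout below are assumptions, not paths defined by this repository):

```bash
# Extract the downloaded embedding archive to a location of your choice
tar -xzf Tencent_AILab_ChineseEmbedding.tar.gz -C /A/B/embeddings/
# Then point the hard-coded path in src/run.py (line 84) at the extracted embedding file.
```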
## Training

Before running, make sure your working directory is the `src` directory.
```bash
$ python run.py
```

Flag Examples:
model: CNN/LSTM/TCNN_p, dataset: /A/B/weibo_senti_100k.csv
(I highly recommend hard-coding `--dataset_dir` in `run.py` at line 33. Datasets are included in the `datasets` directory.)
```bash
--init_lr 0.001 --model CNN/LSTM/TCNN_p --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
```

model: BERT, dataset: /A/B/weibo_senti_100k.csv
```bash
--init_lr 5e-5 --model BERT --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
```
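For reference, a complete training invocation combines `python run.py` with the flags above; the dataset paths remain placeholders:

```bash
# CNN/LSTM/TCNN_p training run
python run.py --init_lr 0.001 --model CNN --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
# BERT training run (lower initial learning rate)
python run.py --init_lr 5e-5 --model BERT --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
```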
## Testing

```bash
python run.py
```

Flags:
```bash
--test --model CNN/LSTM/BERT/TCNN_p --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
```
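Put together, a test run might look like this (placeholder paths as above):

```bash
python run.py --test --model BERT --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
```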
## Interpret

Similarly, replacing `--test` with `--interpret` switches the script into interpretation mode.
```bash
--interpret --model CNN/LSTM/BERT/TCNN_p --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
```
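In the same way, a complete interpretation run could be (placeholder paths as above):

```bash
python run.py --interpret --model CNN --dataset_name weibo_senti_100k.csv --dataset_dir /A/B
```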