# Strongly Incremental Constituency Parsing with Graph Neural Networks
![Example actions](images/actions.jpg)
Code for the paper:
[Strongly Incremental Constituency Parsing with Graph Neural Networks](https://arxiv.org/abs/2010.14568)
[Kaiyu Yang](https://yangky11.github.io/) and [Jia Deng](https://www.cs.princeton.edu/~jiadeng/)
Neural Information Processing Systems (NeurIPS) 2020

```bibtex
@inproceedings{yang2020attachjuxtapose,
title={Strongly Incremental Constituency Parsing with Graph Neural Networks},
author={Yang, Kaiyu and Deng, Jia},
booktitle={Neural Information Processing Systems (NeurIPS)},
year={2020}
}
```

## Requirements
1. Make sure your gcc version is at least 5 (`gcc --version`). I encountered segmentation faults with gcc 4.8.5, but if an older version works for you, it is probably fine.
1. Download and install [Miniconda Python 3](https://docs.conda.io/en/latest/miniconda.html) (Anaconda should also work).
1. `cd` into the root of this repo.
1. Edit [parser.yaml](./parser.yaml) according to your system. For example, remove [- cudatoolkit=10.2](./parser.yaml#L11) if you don't have a GPU. Change the version of cudatoolkit if necessary.
1. Install Python dependencies using conda: `conda env create -f parser.yaml && conda activate parser`. If you have trouble with these two steps, you may instead manually install the packages listed in [parser.yaml](./parser.yaml) in whatever way works for you.
1. Compile the [Evalb](https://nlp.cs.nyu.edu/evalb/) program used for evaluation: `cd EVALB && make && cd ..`

## Data
We include the preprocessed PTB and CTB data in the [data](./data) directory. No additional data needs to be downloaded. For PTB, we use exactly the same data files as [self-attentive-parser](https://github.com/nikitakit/self-attentive-parser). For CTB, the data files are obtained following [distance-parser](https://github.com/hantek/distance-parser), which is also adopted by [HPSG-Neural-Parser](https://github.com/DoodleJZ/HPSG-Neural-Parser). It basically selects a subset of CTB 8.0 that corresponds to CTB 5.1.
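The PTB-style data files are bracketed (s-expression) trees. As a rough illustration of the format, here is a minimal, dependency-free sketch of parsing one such tree into nested tuples; this is not code from this repo, and real PTB files have details (escaping, multiple trees per file) it ignores:

```python
# Minimal sketch: parse one PTB-style bracketed tree into (label, children) tuples.
# Illustrative only; not the repo's data-loading code.
def parse_tree(s):
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def helper(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = helper(i)
                children.append(child)
            else:  # leaf word
                children.append(tokens[i])
                i += 1
        return (label, children), i + 1  # skip the closing ")"

    tree, _ = helper(0)
    return tree

example = "(S (NP (DT The) (NN parser)) (VP (VBZ works)))"
print(parse_tree(example))
```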
## Training
Use [train.py](./train.py) for training models. By default, `python train.py` trains the parser on PTB using the XLNet encoder and graph decoder. It saves training logs and model checkpoints to `./runs/default`. We use [hydra](https://hydra.cc/) to manage command-line arguments; please refer to [conf/train.yaml](./conf/train.yaml) for the complete list. Below are some examples:
* To save results to `./runs/EXPID`, where `EXPID` is an arbitrary experiment identifier:
```bash
python train.py exp_id=EXPID
```

* To use BERT instead of XLNet:
```bash
python train.py model=ptb_bert_graph
```

* To train on CTB using Chinese BERT as the encoder:
```bash
python train.py dataset=ctb model=ctb_bert_graph
```

## Results and Pre-trained Models
We provide hyperparameters, training logs, and pre-trained models for reproducing our main results (Tables 1 and 2 in the paper). In the paper, we ran each experiment 5 times with beam search and reported the mean and its standard error of the mean (SEM). The numbers below are from a single run without beam search.
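For concreteness, the mean/SEM computation over 5 runs can be sketched as follows; the F1 scores below are illustrative placeholders, not results from the paper:

```python
# Sketch of computing the mean and standard error of the mean (SEM) over 5 runs.
# The five F1 values are hypothetical, for illustration only.
from math import sqrt
from statistics import mean, stdev

f1_runs = [96.40, 96.44, 96.38, 96.47, 96.41]  # placeholder numbers
m = mean(f1_runs)
sem = stdev(f1_runs) / sqrt(len(f1_runs))  # sample stdev / sqrt(n)
print(f"mean={m:.2f}, SEM={sem:.3f}")
```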
#### Constituency parsing on PTB
| Model | EM | F1 | LP | LR | Hyperparameters | Training log | Pre-trained model |
| ------------- | -------- | ------- | ------- | -------- | --------------- | ------------- | ----------------- |
| Ours (BERT) | 57.41 | 95.80 | 96.01 | 95.59 | [ptb_bert_graph.yaml](./conf/model/ptb_bert_graph.yaml) | [ptb_bert_graph.txt](https://huggingface.co/kaiyuy/attach-juxtapose-parser-ptb-bert/blob/main/ptb_bert_graph.txt) | [ptb_bert_graph.pth](https://huggingface.co/kaiyuy/attach-juxtapose-parser-ptb-bert) |
| Ours (XLNet) | 59.48 | 96.44 | 96.64 | 96.24 | [ptb_xlnet_graph.yaml](./conf/model/ptb_xlnet_graph.yaml) | [ptb_xlnet_graph.txt](https://huggingface.co/kaiyuy/attach-juxtapose-parser-ptb-xlnet/blob/main/ptb_xlnet_graph.txt) | [ptb_xlnet_graph.pth](https://huggingface.co/kaiyuy/attach-juxtapose-parser-ptb-xlnet) |

#### Constituency parsing on CTB
| Model | EM | F1 | LP | LR | Hyperparameters | Training log | Pre-trained model |
| ------------- | ---------------- | --------------- | --------------- | ---------------- | --------------- | ------------- | ----------------- |
| Ours (BERT) | 49.43 | 93.52 | 93.66 | 93.38 | [ctb_bert_graph.yaml](./conf/model/ctb_bert_graph.yaml) | [ctb_bert_graph.txt](https://huggingface.co/kaiyuy/attach-juxtapose-parser-ctb-bert/blob/main/ctb_bert_graph.txt) | [ctb_bert_graph.pth](https://huggingface.co/kaiyuy/attach-juxtapose-parser-ctb-bert) |

## Evaluation
To evaluate a model checkpoint on PTB:
```bash
python test.py model_path=PATH_TO_MODEL dataset=ptb
```

`PATH_TO_MODEL` is the path to the `*.pth` file generated by the training script or downloaded from our pre-trained models.
To evaluate on CTB:
```bash
python test.py model_path=PATH_TO_MODEL dataset=ctb
```

To evaluate with beam search:
```bash
python test.py model_path=PATH_TO_MODEL dataset=ptb/ctb beam_size=10
```

Please refer to [conf/test.yaml](./conf/test.yaml) for the complete list of command-line arguments.
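The tables above report EM, F1, LP, and LR. As a rough sketch of how labeled precision, recall, and F1 are typically derived from labeled constituent spans; note that Evalb's exact bracketing and equivalence rules differ in details, so this is illustrative only:

```python
# Illustrative labeled precision/recall/F1 over (label, start, end) spans.
# Not Evalb itself: Evalb applies additional normalization and parameter files.
def prf(gold, pred):
    gold, pred = set(gold), set(pred)
    matched = len(gold & pred)          # spans with identical label and boundaries
    lp = matched / len(pred)            # labeled precision
    lr = matched / len(gold)            # labeled recall
    f1 = 2 * lp * lr / (lp + lr) if lp + lr else 0.0
    return lp, lr, f1

gold = {("NP", 0, 2), ("VP", 2, 3), ("S", 0, 3)}
pred = {("NP", 0, 2), ("VP", 1, 3), ("S", 0, 3)}
print(prf(gold, pred))
```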
#### Automatic Mixed Precision (AMP) Support
The evaluation script has [AMP](https://pytorch.org/docs/stable/amp.html) enabled by default. In our experiments, AMP speeds up evaluation on an RTX 2080 Ti or Quadro RTX 6000, but makes no difference on a GTX 1080 Ti. You may want to disable it when comparing speed with prior work that does not support AMP:
```bash
python test.py model_path=PATH_TO_MODEL amp=false
```

#### GPU memory
We use a batch size of 150 during evaluation to fit into our 11 GB of GPU memory. Feel free to change it according to your hardware:
```bash
python test.py model_path=PATH_TO_MODEL eval_batch_size=XXX
```

## Parsing User-Provided Texts
You can use the attach-juxtapose parser to parse your own sentences.
First, download the [spaCy](https://spacy.io/) models used for tokenization and POS tagging:
```bash
python -m spacy download en_core_web_sm
python -m spacy download zh_core_web_sm
```

Then, store the sentences in a text file, one sentence per line. See [input_examples.txt](./input_examples.txt) and [input_examples_chinese.txt](./input_examples_chinese.txt) for examples.
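Such an input file looks like the following (the sentences here are made up for illustration, not the contents of the repo's example files):

```text
The parser reads one sentence per line.
Each line is parsed into a separate tree.
```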
Finally, run the parser from a model checkpoint `PATH_TO_MODEL`, saving the parse trees to an output file, e.g., `output.txt` or `output_chinese.txt`:
```bash
python parse.py model_path=PATH_TO_MODEL input=input_examples.txt output=output.txt
python parse.py language=chinese model_path=PATH_TO_MODEL input=input_examples_chinese.txt output=output_chinese.txt
```

## Static Type Checking
The codebase uses [Python 3 type hints](https://docs.python.org/3.6/library/typing.html) extensively. We use [mypy](http://mypy-lang.org/) for static type checking. Run `mypy` to typecheck the entire codebase. [mypy.ini](./mypy.ini) is the configuration file for mypy.
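As a tiny illustration (not code from this repo) of the kind of annotations mypy checks, consider a fully typed helper; mypy would reject a call like `tag_tokens("hi", 3)` at check time, before the code ever runs:

```python
# Illustrative typed function; mypy verifies the annotations statically.
from typing import List, Tuple

def tag_tokens(tokens: List[str], tag: str) -> List[Tuple[str, str]]:
    """Pair each token with the given POS tag."""
    return [(tok, tag) for tok in tokens]

print(tag_tokens(["a", "cat"], "NN"))
```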
## Credits
* The code for the self-attention layers ([models/utils.py](./models/utils.py)) is based on [self-attentive-parser](https://github.com/nikitakit/self-attentive-parser).
* We include the code of the [Evalb](https://nlp.cs.nyu.edu/evalb/) tool for calculating evaluation metrics.
* The code is formatted using [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black).

## Third-Party Implementations
* [SuPar](https://github.com/yzhangcs/parser)