https://github.com/nyu-mll/GLUE-baselines
[DEPRECATED] Repo for exploring multi-task learning approaches to learning sentence representations
- Host: GitHub
- URL: https://github.com/nyu-mll/GLUE-baselines
- Owner: nyu-mll
- Created: 2017-11-02T14:02:38.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2021-08-03T22:12:41.000Z (over 3 years ago)
- Last Synced: 2024-05-21T18:32:07.486Z (8 months ago)
- Language: Python
- Homepage: https://gluebenchmark.com
- Size: 1.44 MB
- Stars: 732
- Watchers: 27
- Forks: 163
- Open Issues: 16
- Metadata Files:
    - Readme: README.md
# GLUE Baselines
This repo contains the code for baselines for the [General Language Understanding Evaluation](https://gluebenchmark.com/) (GLUE) benchmark.
See [our paper](https://openreview.net/pdf?id=rJ4km2R5t7) for more details about GLUE and the baselines.

# Deprecation Warning

Use this code to reproduce our baselines. If you want code to use as a starting point for new development, though, we strongly recommend using [jiant](https://github.com/jsalt18-sentence-repl/jiant) instead; it's a much more extensive and much better-documented toolkit built around the same goals.
## Dependencies
Make sure you have installed the packages listed in ``environment.yml``.
Where specific package versions are listed, those versions are required.
If you use conda, you can create an environment from this file with the following command:

```
conda env create -f environment.yml
```

Note: the version of AllenNLP available on pip may not be compatible with PyTorch 0.4, in which case we recommend installing AllenNLP from [source](https://github.com/allenai/allennlp).
## Downloading GLUE
We provide a convenience Python script for downloading all GLUE data and standard splits.
```
python download_glue_data.py --data_dir glue_data --tasks all
```

After downloading GLUE, point ``PATH_PREFIX`` in ``src/preprocess.py`` to the directory containing the data.

If you are blocked from s3.amazonaws.com (as may be the case in China), downloading MRPC will fail. Instead, you can run the commands below:
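Before editing ``PATH_PREFIX``, it can help to confirm the download produced the expected task folders. A minimal sanity-check sketch (the folder names below are assumptions based on the standard GLUE layout, not taken from this repo):

```python
# Sanity-check sketch: confirm the data directory contains the expected GLUE
# task folders before pointing PATH_PREFIX at it. The folder names below are
# assumptions based on the standard GLUE layout.
import os

EXPECTED_TASKS = ["CoLA", "SST-2", "MRPC", "QQP", "STS-B",
                  "MNLI", "QNLI", "RTE", "WNLI"]

def missing_tasks(data_dir):
    """Return the expected GLUE task folders not present under data_dir."""
    return [t for t in EXPECTED_TASKS
            if not os.path.isdir(os.path.join(data_dir, t))]
```

An empty return value means every expected folder is present.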
```
git clone https://github.com/wasiahmad/paraphrase_identification.git
python download_glue_data.py --data_dir glue_data --tasks all --path_to_mrpc=paraphrase_identification/dataset/msr-paraphrase-corpus
```

## Running
To run our baselines, use ``src/main.py``.
Because preprocessing is expensive (particularly for ELMo) and we often want to run multiple experiments using the same preprocessing, we use the argument ``--exp_dir`` to share preprocessing between experiments. We use the argument ``--run_dir`` to save information specific to a particular run, with ``run_dir`` usually nested within ``exp_dir``.

```
python main.py --exp_dir EXP_DIR --run_dir RUN_DIR --train_tasks all --word_embs_file PATH_TO_GLOVE
```

NB: the version of AllenNLP used has [issues](https://github.com/allenai/allennlp/issues/342) with tensorboard. You may need to replace ``from tensorboard import SummaryWriter`` with ``from tensorboardX import SummaryWriter`` in your AllenNLP source files.
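One way to apply that substitution is a small helper like the following (a sketch, not part of this repo; pass it the path of the AllenNLP source file you need to patch):

```python
# Sketch of the import substitution described above: rewrite the tensorboard
# import in a source file to use tensorboardX instead. Hypothetical helper,
# not part of this repo.
from pathlib import Path

OLD = "from tensorboard import SummaryWriter"
NEW = "from tensorboardX import SummaryWriter"

def patch_tensorboard_import(path):
    """Replace the tensorboard import with tensorboardX; return True if changed."""
    src = Path(path).read_text()
    patched = src.replace(OLD, NEW)
    if patched != src:
        Path(path).write_text(patched)
    return patched != src
```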
## GloVe, CoVe, and ELMo
Many of our models make use of [GloVe pretrained word embeddings](https://nlp.stanford.edu/projects/glove/), in particular the 300-dimensional, 840B version.
To use GloVe vectors, download and extract the relevant files and set ``word_embs_file`` to the GloVe file.
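The GloVe text format is one token followed by its vector components per line; a minimal reader sketch (hypothetical helper, not part of this repo):

```python
# Minimal sketch of reading GloVe-format vectors (word, then floats, one entry
# per line) into a dict. Hypothetical helper, not part of this repo; it assumes
# tokens contain no spaces (a few entries in the 840B file do), and for the
# full 840B file you would normally cap max_words or stream instead.
def load_glove(path, max_words=None):
    """Return {word: [float, ...]} read from a GloVe-format text file."""
    embs = {}
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if max_words is not None and i >= max_words:
                break
            parts = line.rstrip("\n").split(" ")
            embs[parts[0]] = [float(x) for x in parts[1:]]
    return embs
```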
To learn embeddings from scratch, set ``--glove`` to 0.

We use the CoVe implementation provided [here](https://github.com/salesforce/cove).
To use CoVe, clone the repo, fill in ``PATH_TO_COVE`` in ``src/models.py``, and set ``--cove`` to 1.

We use the ELMo implementation provided by [AllenNLP](https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md).
To use ELMo, set ``--elmo`` to 1. To use ELMo without GloVe, additionally set ``--elmo_no_glove`` to 1.

## Reference
If you use this code or GLUE, please consider citing us.
```
@unpublished{wang2018glue,
    title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for
           Natural Language Understanding},
    author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill,
            Felix and Levy, Omer and Bowman, Samuel R.},
    note={arXiv preprint 1804.07461},
    year={2018}
}
```

Feel free to contact alexwang _at_ nyu.edu with any questions or comments.