Comparing Data-Driven Techniques for Enhancing Negation Sensitivity in MLM-Based Language-Models
- Host: GitHub
- URL: https://github.com/lazerlambda/udl-negation
- Owner: LazerLambda
- License: mit
- Created: 2023-01-07T23:23:55.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-03-22T14:53:50.000Z (over 2 years ago)
- Last Synced: 2025-03-25T21:25:49.121Z (6 months ago)
- Topics: bert, computational-linguistics, deep-learning, deeplearning, dl, encoder, machine-learning, masked-language-modeling, ml, negation, nlp, nlu, python, research, statistics, torch, transformers
- Language: Jupyter Notebook
- Size: 5.97 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
UDL-Negation
==============

Code accompanying the report "Comparing Data-Driven Techniques for Enhancing Negation Sensitivity in MLM-Based Language-Models".
Transformer language models have shown impressive results in recent years. Despite this overall progress, they still lack an understanding of fundamental natural-language concepts; one significant problem is their misunderstanding of negation. This work tests three data-driven techniques to improve a model's sensitivity to negation. The approaches train a BERT-Small model on:
- data with an increased share of negations (filtered data), using MLM
- filtered data (with MLM) plus artificially generated, WordNet-based data in an adversarial setting (WordNet adversarial data), using supervised masking to guide the model during training (a minimal sketch follows below)
- WordNet adversarial data and filtered data, using only MLM.
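To make the second approach concrete, here is a minimal sketch of negation-targeted ("supervised") masking, assuming a simple cue-list heuristic: negation cues are always masked, while other tokens are masked only with the usual 15% probability, so the MLM loss concentrates on negation. The cue list, the checkpoint name, and the `mask_negation_tokens` helper are illustrative assumptions, not code from this repository.

```python
import random

import torch
from transformers import AutoTokenizer

# Illustrative cue list (assumption); the repo may derive its cues differently.
NEGATION_CUES = {"not", "no", "never", "nothing", "nobody", "neither", "nor"}

# A publicly available BERT-Small checkpoint, used here as an assumed stand-in.
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-small")

def mask_negation_tokens(text: str, fallback_prob: float = 0.15):
    """Mask negation cues (always) and other tokens (with fallback_prob) for MLM."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    labels = torch.full_like(input_ids, -100)  # -100 is ignored by the MLM loss
    for i, tok in enumerate(tokenizer.convert_ids_to_tokens(input_ids[0])):
        if tok in tokenizer.all_special_tokens:
            continue
        if tok.lower() in NEGATION_CUES or random.random() < fallback_prob:
            labels[0, i] = input_ids[0, i]             # predict the original token
            input_ids[0, i] = tokenizer.mask_token_id  # replace it with [MASK]
    return input_ids, labels

ids, _ = mask_negation_tokens("The model does not understand negation.")
print(tokenizer.decode(ids[0]))  # "... does [MASK] understand ..." (plus random masks)
```

Filling `labels` with -100 everywhere except masked positions follows the Hugging Face MLM convention, so the loss is computed only on the tokens the model has to reconstruct.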
# Requirements

To install the required libraries, run `pip install -r requirements.txt`.
# Create Data
To create the filtered training data, run the following four commands:

```
make owt
make bc
make wiki
make cc_news
```

Set paths and other configurations in `neg_udl/config/data_config.yaml`.
WARNING: These operations might take several days!
WARNING: Undefined behavior has been observed when using multiprocessing!
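For intuition, the filtering step can be sketched in a few lines: keep only documents that contain at least one negation cue, which raises the share of negations in the training data. The cue pattern and the `filter_negation_documents` helper are assumptions for this example, not the repository's actual pipeline.

```python
import re

# Hypothetical cue pattern for this sketch; the repo's actual filter may differ.
NEGATION_PATTERN = re.compile(
    r"\b(not|no|never|nothing|nobody|neither|nor)\b|n't\b", re.IGNORECASE
)

def filter_negation_documents(documents):
    """Keep only documents that contain at least one negation cue."""
    return [doc for doc in documents if NEGATION_PATTERN.search(doc)]

corpus = [
    "The model understands syntax.",
    "The model does not understand negation.",
]
print(filter_negation_documents(corpus))
# ['The model does not understand negation.']
```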
# Run Experiments

Run the experiments with the following commands (a CUDA-capable device is recommended):
```
make exp_1_filtered
make exp_2_mlm+sup
make exp_3_mlm
make exp_3+_mlm
```
Set paths and other configurations in `neg_udl/config/exp{1,2,3,3+}_config.yaml`.
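Since the configs are YAML files, a quick way to inspect one before launching a run is the snippet below; the file name follows the `exp{1,2,3,3+}_config.yaml` pattern above, and the exact keys inside are not documented here.

```python
import yaml  # PyYAML

# exp1_config.yaml follows the naming pattern given above.
with open("neg_udl/config/exp1_config.yaml") as f:
    config = yaml.safe_load(f)

print(config)  # nested dict of paths and training settings (assumed layout)
```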
# Evaluate
To evaluate the trained model on selected GLUE tasks, run:
`make evaluate`
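As a rough illustration of what such an evaluation involves (this is not the repository's evaluation script, and the checkpoint path is hypothetical), a minimal SST-2 scoring loop with Hugging Face `datasets` and `evaluate` could look like this:

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "models/exp_1_filtered"  # hypothetical path to a trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir).eval()

dataset = load_dataset("glue", "sst2", split="validation")
metric = evaluate.load("glue", "sst2")

for batch in dataset.iter(batch_size=32):
    inputs = tokenizer(batch["sentence"], padding=True, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(dim=-1)
    metric.add_batch(predictions=preds, references=batch["label"])

print(metric.compute())  # e.g. {'accuracy': ...}
```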
# Project Organization
```
├── LICENSE
├── Makefile           <- Makefile with all necessary commands to generate data
│                         and run experiments.
│
├── data               <- Create after cloning!
│   ├── interim        <- Intermediate data that has been transformed.
│   └── processed      <- The final data used for experiments.
│
├── models             <- Create after cloning! Trained and serialized models,
│                         model predictions, or model summaries.
│
├── notebooks          <- Jupyter notebooks (for plots).
│
├── reports            <- Report.
│
├── requirements.txt   <- The requirements file for reproducing the analysis
│                         environment, e.g. generated with
│                         `pip freeze > requirements.txt`.
│
├── evaluation         <- Evaluation script for selected GLUE tasks.
│
└── neg_udl            <- Source code for use in this project.
```
Project based on the cookiecutter data science project template. #cookiecutterdatascience