# meta_XLM

Cross-lingual Language Model (XLM) pretraining and Model-Agnostic Meta-Learning (MAML) for fast adaptation of deep networks.

https://github.com/tikquuss/meta_xlm

If you use this repository in your work, please cite:

```
@misc{
pascal2021on,
title={On the use of linguistic similarities to improve Neural Machine Translation for African Languages},
author={Tikeng Notsawo Pascal and NANDA ASSOBJIO Brice Yvan and James Assiene},
year={2021},
url={https://openreview.net/forum?id=Q5ZxoD2LqcI}
}
```

## I. Cross-lingual language model pretraining ([XLM](https://github.com/facebookresearch/XLM))

XLM supports multi-GPU and multi-node training, and contains code for:
- **Language model pretraining**:
  - **Causal Language Model** (CLM)
  - **Masked Language Model** (MLM)
  - **Translation Language Model** (TLM)
- **GLUE** fine-tuning
- **XNLI** fine-tuning
- **Supervised / Unsupervised MT** training:
  - Denoising auto-encoder
  - Parallel data training
  - Online back-translation

#### Dependencies

- Python 3
- [NumPy](http://www.numpy.org/)
- [PyTorch](http://pytorch.org/) (currently tested on versions 0.4 and 1.0)
- [fastBPE](https://github.com/facebookresearch/XLM/tree/master/tools#fastbpe) (generate and apply BPE codes)
- [Moses](https://github.com/facebookresearch/XLM/tree/master/tools#tokenizers) (scripts to clean and tokenize text only - no installation required)
- [Apex](https://github.com/nvidia/apex#quick-start) (for fp16 training)

### Pretrained models

Machine translation BLEU scores. Each row corresponds to a language pair of interest, on which BLEU scores are reported. The **None** column is a baseline: the BLEU score of a model trained on the pair without any MLM or TLM pretraining. The **Pair** column is a baseline: the BLEU score of a model trained on the pair with MLM and TLM pretraining. The **Random** column is also a baseline: the BLEU score of a three-language multi-task model where the added language was chosen purely at random. The **Historical** column reports the BLEU score of our three-language multi-task model where the added language was chosen using historically identified clusters, and the **LM** column reports the BLEU score of our three-language multi-task model where the added language was chosen using LM similarity.



| Pretraining    | None  | Pair  | Random | Historical | LM    |
|----------------|-------|-------|--------|------------|-------|
| Bafia-Bulu     | 09.19 | 12.58 | 23.52  | 28.81      | 13.03 |
| Bulu-Bafia     | 13.50 | 15.15 | 24.76  | 32.83      | 13.91 |
| Bafia-Ewondo   | 09.30 | 11.28 | 08.28  | 38.90      | 38.90 |
| Ewondo-Bafia   | 13.99 | 16.07 | 10.26  | 35.84      | 35.84 |
| Bulu-Ewondo    | 10.27 | 12.11 | 11.82  | 39.12      | 34.86 |
| Ewondo-Bulu    | 11.62 | 14.42 | 12.27  | 34.91      | 30.98 |
| Guidar-Guiziga | 11.95 | 15.05 | -      | -          | -     |
| Guiziga-Guidar | 08.05 | 08.94 | -      | -          | -     |
| Guiziga-Mofa   | 17.78 | 21.67 | -      | -          | -     |
| Mofa-Guiziga   | 12.02 | 15.41 | -      | -          | -     |
| Guidar-Kapsiki | 14.74 | 17.78 | -      | -          | -     |
| Kapsiki-Guidar | 08.63 | 09.33 | -      | -          | -     |
| French-Bulu    | 19.91 | 23.47 | -      | 25.06      | -     |
| Bulu-French    | 17.49 | 22.44 | -      | 23.68      | -     |
| French-Bafia   | 14.48 | 15.35 | -      | 30.65      | -     |
| Bafia-French   | 08.59 | 11.17 | -      | 24.49      | -     |
| French-Ewondo  | 11.51 | 13.93 | -      | 35.50      | -     |
| Ewondo-French  | 10.60 | 13.77 | -      | 27.34      | -     |
## II. Model-Agnostic Meta-Learning ([MAML](https://arxiv.org/abs/1703.03400))

See [maml](https://github.com/cbfinn/maml), [learn2learn](https://github.com/learnables/learn2learn)...

See [HowToTrainYourMAMLPytorch](https://github.com/AntreasAntoniou/HowToTrainYourMAMLPytorch) for a replication of the paper ["How to train your MAML"](https://arxiv.org/abs/1810.09502), along with a replication of the original ["Model Agnostic Meta Learning"](https://arxiv.org/abs/1703.03400) (MAML) paper.

## III. Train your own (meta-)model

**Open the illustrative notebook in Colab** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Tikquuss/meta_XLM/blob/master/notebooks/demo/tuto.ipynb)

**Note**: most of the bash scripts used in this repository were written on Windows and can produce this [error](https://prograide.com/pregunta/5588/configure--bin--sh--m-mauvais-interpreteur) (`bad interpreter`) on Linux platforms because of their CRLF line endings.
The problem can be fixed with the following command:
```
filename=my_file.sh
# strip the carriage returns and replace the file
tr -d '\r' < $filename > $filename.new && mv $filename.new $filename
```
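If many scripts are affected, they can all be converted in one pass; a convenience one-liner (not part of the repository) using GNU sed:
```
find . -name "*.sh" -exec sed -i 's/\r$//' {} +
```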
### 1. Preparing the data

At this stage, if you already have pre-processed binary data in `.pth` format (for example from an XLM experiment, or produced by yourself), group it in a specific folder that you will pass as a parameter when calling the script [train.py](XLM/train.py).
Otherwise, we assume that you have txt files available for preprocessing. Consider the following example, with three translation tasks: `English-French, German-English and German-French`.

We have the following files available for preprocessing:
```
- en-fr.en.txt and en-fr.fr.txt
- de-en.de.txt and de-en.en.txt
- de-fr.de.txt and de-fr.fr.txt
```
All these files must be in the same folder (`PARA_PATH`).
You can also (only, or in addition) have monolingual data available (`en.txt, de.txt and fr.txt`, in a `MONO_PATH` folder).
Parallel and monolingual data can all be in the same folder.

**Note**: languages must be submitted in alphabetical order (`de-en` and not `en-de`, `fr-ru` and not `ru-fr`...). If you submit them in another order you will have problems loading the data during training, because when the [train.py](XLM/train.py) script runs, parameters such as the language pair are sorted alphabetically before being processed. Don't worry about this restriction: XLM for MT is naturally trained to translate sentences in both directions. See [translate.py](scripts/translate.py).
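As a quick sanity check, a pair name can be normalized to alphabetical order before naming your files; a small illustrative snippet (not part of the repository):
```
# sort two language codes alphabetically to build the pair name
src=fr
tgt=de
pair=$(printf "%s\n%s\n" "$src" "$tgt" | sort | paste -sd- -)
echo $pair   # de-fr
```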

[OPUS collections](http://opus.nlpl.eu/) are a good source of datasets. The [opus.sh](scripts/opus.sh) script illustrates how to download data from OPUS and convert it to text files. Change the parameters (`$PARA_PATH` and `$SRC`) in [opus.sh](scripts/opus.sh), then run:
```
cd meta_XLM
chmod +x ./scripts/opus.sh
./scripts/opus.sh de-fr
```

Another source of `other_languages-english` data is the [anki Tab-delimited Bilingual Sentence Pairs](http://www.manythings.org/anki/). Simply download the .zip file and unzip it to extract the `other_language.txt` file. Each line of this file usually has the form `sentence_en sentence_other_language other_information`. See [anki.py](scripts/anki.py) and [anki.sh](scripts/anki.sh) for how to extract data from [anki](http://www.manythings.org/anki/). Example of how to download and extract the `de-en` and `en-fr` pair data:
```
cd meta_XLM
output_path=/content/data/para
mkdir $output_path
chmod +x ./scripts/anki.sh
./scripts/anki.sh de,en deu-eng $output_path scripts/anki.py
./scripts/anki.sh en,fr fra-eng $output_path scripts/anki.py
```
After that, you will have the following files in `data/para`: `de-en.de.txt, de-en.en.txt, deu.txt, deu-eng.zip and _about.txt`.

Next, move into the `XLM` folder.
```
cd XLM
```
Install the following dependencies ([fastBPE](https://github.com/facebookresearch/XLM/tree/master/tools#fastbpe) and [Moses](https://github.com/facebookresearch/XLM/tree/master/tools#tokenizers)) if you have not already done so.
```
git clone https://github.com/moses-smt/mosesdecoder tools/mosesdecoder
git clone https://github.com/glample/fastBPE tools/fastBPE && cd tools/fastBPE && g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast
```

Change the parameters in [data.sh](data.sh). Between lines 94 and 100 of [data.sh](data.sh) you have two options, corresponding to two scripts to execute depending on how the folders containing your data are laid out. Option 2 is chosen by default; kindly uncomment the lines corresponding to your option.
With too many BPE codes (depending on the size of the dataset) you may get this [error](https://github.com/glample/fastBPE/issues/7). Decrease the number of codes (e.g. binary-search for the largest number of codes that makes the error disappear).

```
languages=de,en,fr
chmod +x ../data.sh
../data.sh $languages
```

If you stop execution while a file is being processed, please delete this partially processed file before continuing or restarting, otherwise processing will continue from this corrupted file and errors will certainly occur.

After this you will have the following (necessary) files in `$OUTPATH` (and `$OUTPATH/fine_tune` depending on the parameter `$sub_task`):

```
- monolingual data :
  - training data   : train.fr.pth, train.en.pth and train.de.pth
  - test data       : test.fr.pth, test.en.pth and test.de.pth
  - validation data : valid.fr.pth, valid.en.pth and valid.de.pth
- parallel data :
  - training data :
    - train.en-fr.en.pth and train.en-fr.fr.pth
    - train.de-en.en.pth and train.de-en.de.pth
    - train.de-fr.de.pth and train.de-fr.fr.pth
  - test data :
    - test.en-fr.en.pth and test.en-fr.fr.pth
    - test.de-en.en.pth and test.de-en.de.pth
    - test.de-fr.de.pth and test.de-fr.fr.pth
  - validation data :
    - valid.en-fr.en.pth and valid.en-fr.fr.pth
    - valid.de-en.en.pth and valid.de-en.de.pth
    - valid.de-fr.de.pth and valid.de-fr.fr.pth
- code and vocab
```
To use the biblical corpus, run [bible.sh](bible.sh) instead of [data.sh](data.sh). Here is the list of languages available (to be given as the `$languages` value) in this case:
- **Languages with data in the New and Old Testaments**: `Francais, Anglais, Fulfulde_Adamaoua or Fulfulde_DC (formal name : Fulfulde), Bulu, KALATA_KO_SC_Gbaya or KALATA_KO_DC_Gbaya (formal name : Gbaya), BIBALDA_TA_PELDETTA (formal name : MASSANA), Guiziga, Kapsiki_DC (formal name : Kapsiki), Tupurri`.
- **Languages with data in the New Testament only**: `Bafia, Ejagham, Ghomala, MKPAMAN_AMVOE_Ewondo (formal name : Ewondo), Ngiemboon, Dii, Vute, Limbum, Mofa, Mofu_Gudur, Doyayo, Guidar, Peere_Nt&Psalms, Samba_Leko, Du_na_sdik_na_wiini_Alaw`.

[bible.sh](bible.sh) requires a folder named `csvs` in `csv_path`. Here is the [drive link](https://drive.google.com/file/d/1NuSJ-NT_BsU1qopLu6avq6SzUEf6nVkk/view?usp=sharing) to its zipped version.
For training, specify the first four letters of each language (`Bafi` instead of `Bafia`, for example), except for `KALATA_KO_SC_Gbaya/KALATA_KO_DC_Gbaya`, which becomes `Gbay` (first four letters of Gbaya), `BIBALDA_TA_PELDETTA`, which becomes `MASS` (first four letters of MASSANA), `MKPAMAN_AMVOE_Ewondo`, which becomes `Ewon` (first four letters of Ewondo), and `Francais` and `Anglais`, which become respectively `fr` and `en`. Indeed, [bible.sh](bible.sh) uses these abbreviations to create the files, not the language names themselves.
One last thing about the biblical corpus: when only one language is to be specified, it must be specified twice. For example: `languages=Bafia,Bafia` instead of `languages=Bafia`.
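The four-letter abbreviations can be derived mechanically; a tiny illustrative snippet (the special cases listed above must still be handled by hand):
```
# first four letters of the language name
lang=Bafia
abbr=${lang:0:4}
echo $abbr   # Bafi
```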

### 2. Pretrain a language (meta-)model

Install the following dependency ([Apex](https://github.com/nvidia/apex#quick-start)) if you have not already done so.
```
git clone https://github.com/NVIDIA/apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
```

Instead of passing all the parameters of train.py on the command line, put them in a JSON file and pass the path to this file as a parameter (see the [lm_template.json](configs/lm_template.json) file for more details).
```
config_file=../configs/lm_template.json
python train.py --config_file $config_file
```
If you pass a parameter on the command line when calling the script [train.py](XLM/train.py) (example: `python train.py --config_file $config_file --data_path my/data_path`), it overrides the one given in `$config_file`.
Once training is finished, you will find a file named `train.log`, containing information about the training, in the `$dump_path/$exp_name/$exp_id` folder, along with your checkpoints and best model.
When `"mlm_steps":"..."`, train.py automatically derives the objectives from the languages, giving `"mlm_steps":"de,en,fr,de-en,de-fr,en-fr"` (give an explicit value to mlm_steps if you don't want to do all the MLM and TLM objectives, for example `"mlm_steps":"en,fr,en-fr"`). The same applies to `"clm_steps":"..."`, which becomes `"clm_steps":"de,en,fr"` in this case.

Note:
- `en` means MLM on `en`, and requires the following three files in `data_path`: `a.en.pth`, `a ∈ {train, test, valid}` (monolingual data)
- `en-fr` means TLM on `en` and `fr`, and requires the following six files in `data_path`: `a.en-fr.b.pth`, `a ∈ {train, test, valid}` and `b ∈ {en, fr}` (parallel data)
- `en,fr,en-fr` means MLM on `en` and `fr` plus TLM on `en-fr`, and requires the following twelve files in `data_path`: `a.b.pth` and `a.en-fr.b.pth`, `a ∈ {train, test, valid}` and `b ∈ {en, fr}`
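To make this concrete, here is a minimal sketch of what such a config file might contain for our three languages. The field values are illustrative assumptions, not the actual contents of [lm_template.json](configs/lm_template.json), which remains the reference:
```
{
  "exp_name": "mlm_tlm_de_en_fr",
  "exp_id": "1",
  "dump_path": "../dumped",
  "data_path": "../data/processed/de-en-fr",
  "lgs": "de-en-fr",
  "clm_steps": "",
  "mlm_steps": "...",
  "emb_dim": 512,
  "n_layers": 6,
  "n_heads": 8,
  "dropout": 0.1,
  "attention_dropout": 0.1,
  "gelu_activation": "True",
  "batch_size": 32,
  "bptt": 256,
  "optimizer": "adam,lr=0.0001",
  "epoch_size": 100000,
  "max_epoch": 100,
  "validation_metrics": "_valid_mlm_ppl",
  "stopping_criterion": "_valid_mlm_ppl,10"
}
```
Here `"mlm_steps":"..."` expands, as explained above, to MLM on `de, en, fr` plus TLM on `de-en, de-fr, en-fr`.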

To [train with multiple GPUs](https://github.com/facebookresearch/XLM#how-can-i-run-experiments-on-multiple-gpus) use:
```
export NGPU=8; python -m torch.distributed.launch --nproc_per_node=$NGPU train.py --config_file $config_file
```

**Tips**: Even when the validation perplexity plateaus, keep training your model. The larger the batch size the better (so using multiple GPUs will improve performance). Tuning the learning rate (e.g. [0.0001, 0.0002]) should help.

In the case of meta-learning, you just have to separate your meta-tasks with `|` in `lgs` and in the objectives (`clm_steps, mlm_steps, ae_steps, mt_steps, bt_steps and pc_steps`).
For example, if you only want to do meta-learning (without XLM) in our case, specify these parameters: `"lgs":"de-en|de-fr|en-fr"`, `"clm_steps":"...|...|..."` and/or `"mlm_steps":"...|...|..."`. Specified like this, these last two parameters become respectively `"clm_steps":"de,en|de,fr|en,fr"` and/or `"mlm_steps":"de,en,de-en|de,fr,de-fr|en,fr,en-fr"`.
The expansion of the ellipsis (`...`) follows the same logic as above. That is, at the level of the meta-task `de-en`:
- if we only want to do MLM (without TLM), `mlm_steps` becomes `"mlm_steps":"de,en|...|..."`
- if we don't want to do anything, `mlm_steps` becomes `"mlm_steps":"|...|..."`.

It is not allowed, however, to specify a meta-task with no objective at all. In our case, `"clm_steps":"...||..."` and/or `"mlm_steps":"...||..."` will raise an exception, since the meta-task `de-fr` (the second task) has no objective.
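For instance, using the expansion rules above, a pure meta-learning configuration for our three meta-tasks could contain the following fields (an illustrative fragment):
```
"lgs": "de-en|de-fr|en-fr",
"mlm_steps": "...|...|..."
```
which train.py expands into:
```
"mlm_steps": "de,en,de-en|de,fr,de-fr|en,fr,en-fr"
```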

If you want to do meta-learning and XLM simultaneously:
- `"lgs":"de-en-fr|de-en-fr|de-en-fr"`
- follow the same logic as described above for the other parameters.

###### Description of some essential parameters

```
## main parameters
exp_name # experiment name
exp_id # Experiment ID
dump_path # where to store the experiment (the model will be stored in $dump_path/$exp_name/$exp_id)

## data location / training objective
data_path # data location
lgs # considered languages/meta-tasks
clm_steps # CLM objective
mlm_steps # MLM objective

## transformer parameters
emb_dim # embeddings / model dimension
n_layers # number of layers
n_heads # number of heads
dropout # dropout
attention_dropout # attention dropout
gelu_activation # GELU instead of ReLU

## optimization
batch_size # sequences per batch
bptt # sequence length
optimizer # optimizer
epoch_size # number of sentences per epoch
max_epoch # maximum number of epochs
validation_metrics # validation metric (used to decide when to save the best model)
stopping_criterion # end the experiment if the stopping criterion does not improve

## dataset
#### These three parameters are always rounded to an integer number of batches, so don't be surprised if you see values different from the ones you provided.
train_n_samples # use only train_n_samples training samples
valid_n_samples # use only valid_n_samples validation samples
test_n_samples # use only test_n_samples test samples
#### If you don't have enough RAM/GPU or swap memory, set these three parameters to True, otherwise you may get an error like this during evaluation:
###### RuntimeError: copy_if failed to synchronize: cudaErrorAssert: device-side assert triggered
remove_long_sentences_train # remove long sentences from the training dataset
remove_long_sentences_valid # remove long sentences from the validation dataset
remove_long_sentences_test # remove long sentences from the test dataset
```

###### There are other parameters that are not specified here (see [train.py](XLM/train.py))

### 3. Train an (unsupervised/supervised) MT system from a pretrained meta-model

See [mt_template.json](configs/mt_template.json) file for more details.
```
config_file=../configs/mt_template.json
python train.py --config_file $config_file
```

When only the `ae_steps` and `bt_steps` objectives are specified, this is unsupervised machine translation and requires only monolingual data. If parallel data is available, give `mt_steps` a value based on the language pairs for which the data is available.
All the remarks made above about parameter passing and meta-learning remain valid here: if you want to exclude a meta-task from an objective, put a blank in its place. Suppose, in the meta-learning case, we want to exclude from `"ae_steps":"en,fr|en,de|de,fr"` the meta-task:
- `de-en`: `ae_steps` becomes `"ae_steps":"en,fr||de,fr"`
- `de-fr`: `ae_steps` becomes `"ae_steps":"en,fr|en,de|"`

###### Description of some essential parameters
The descriptions given above remain valid here.
```
## main parameters
reload_model # model to reload for encoder,decoder
## data location / training objective
ae_steps # denoising auto-encoder training steps
bt_steps # back-translation steps
mt_steps # parallel training steps
word_shuffle # noise for auto-encoding loss
word_dropout # noise for auto-encoding loss
word_blank # noise for auto-encoding loss
lambda_ae # scheduling on the auto-encoding coefficient

## transformer parameters
encoder_only # set to False to use a decoder (required for MT)

## optimization
tokens_per_batch # use batches with a fixed number of words
eval_bleu # also evaluate the BLEU score
```
###### There are other parameters that are not specified here (see [train.py](XLM/train.py))
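As an illustration, an unsupervised MT configuration (monolingual data only) for a single `en-fr` task might set the parameters above as follows. This is a sketch with illustrative values (the noise and scheduling values follow common XLM settings); [mt_template.json](configs/mt_template.json) remains the reference:
```
"lgs": "en-fr",
"ae_steps": "en,fr",
"bt_steps": "en-fr-en,fr-en-fr",
"word_shuffle": 3,
"word_dropout": 0.1,
"word_blank": 0.1,
"lambda_ae": "0:1,100000:0.1,300000:0",
"encoder_only": "False",
"tokens_per_batch": 2000,
"eval_bleu": "True",
"reload_model": "best-valid_mlm_ppl.pth,best-valid_mlm_ppl.pth"
```
With parallel data available, one would add, for example, `"mt_steps":"en-fr,fr-en"`.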

### 4. Case of meta-learning: optionally fine-tune the meta-model on a specific (sub-)NMT (meta-)task

At this point, if your fine-tuning data did not come from the previous pre-processing, you can simply prepare your txt data and call the script build_meta_data.sh with the (sub-)task in question. Since the BPE codes and vocabulary must be preserved, we have prepared another script ([build_fine_tune_data.sh](scripts/build_fine_tune_data.sh)) in which we directly apply BPE tokenization to the dataset and binarize everything using preprocess.py, based on the codes and vocabulary of the meta-model. So we have to call this script for each subtask like this:

```
languages=
chmod +x ../ft_data.sh
../ft_data.sh $languages
```

At this stage, restart the training as in the previous section with:
- `lgs="en-fr"`
- `reload_model` = path to the folder where you stored the meta-model
- `"bt_steps":"..."`, `"ae_steps":"..."` and/or `"mt_steps":"..."` (replace the ellipses with your specific objectives, if any)

You can use one of the two previously trained meta-models: the pretrained meta-model (MLM, TLM) or the meta-MT model trained from the pretrained meta-model.
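Concretely, the fine-tuning config might differ from the one used for meta-training only in these fields (an illustrative fragment; the checkpoint path is hypothetical):
```
"lgs": "en-fr",
"reload_model": "../dumped/meta_mlm/1/best-valid_mlm_ppl.pth,../dumped/meta_mlm/1/best-valid_mlm_ppl.pth",
"ae_steps": "en,fr",
"bt_steps": "en-fr-en,fr-en-fr",
"mt_steps": "en-fr,fr-en"
```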

### 5. How to evaluate a language model trained on a language L on another language L'.

###### Our results

Test perplexity/accuracy of a language model trained on the row language and evaluated on the column language, for 22 languages (`Bafi, Bulu, Ewon, Ghom, Limb, Ngie, Dii, Doya, Peer, Samb, Guid, Guiz, Kaps, Mofa, Mofu, Du_n, Ejag, Fulf, Gbay, MASS, Tupu, Vute`). Each cell reports `perplexity/accuracy`; the diagonal (training and evaluation language identical) shows by far the lowest perplexities (e.g. `Bafi` on `Bafi`: 15.16/46.11; `Bulu` on `Bulu`: 18.60/43.26).

###### Prerequisite
If you want to evaluate the LM on a language `lang`, you must first have a file named `lang.txt` in the `$src_path` directory of [eval_data.sh](eval_data.sh).
For example, if you want to use the biblical corpus, you can run [scripts/bible.py](scripts/bible.py):
```
# folder containing the csvs folder
csv_path=
# folder in which the target folders (mono or para) will be created
output_dir=
# monolingual ("mono") or parallel ("para") data
data_type=mono
# list of languages to consider, in alphabetical order and separated by commas
# case of one language
languages=lang,lang
# case of many languages
languages=lang1,lang2,...
# use only the Old Testament (old_only=True) or only the New Testament (new_only=True)
new_only=True

python ../scripts/bible.py --csv_path $csv_path --output_dir $output_dir --data_type $data_type --languages $languages --new_only $new_only
```
See [scripts/bible.py](scripts/bible.py) for the other parameters.

###### Data pre-processing
Modify the parameters in [eval_data.sh](eval_data.sh):
```
# languages to be evaluated
languages=lang1,lang2,...
chmod +x ../eval_data.sh
../eval_data.sh $languages
```

###### Evaluation

Suppose we want to evaluate the LM trained on `Bafi` on another language, say `Bulu`. We replace the file `test.Bulu.pth` (which was created with the `VOCAB` and `CODES` of `Bafi`, the evaluating language) with a copy named `test.Bafi.pth`, since the `train.py` script requires the dataset files to carry the name of the `lgs` of the evaluating model. Then we just run the evaluation; the results (accuracy and perplexity) we get are those of the Bafia LM on the Bulu language.
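In shell terms, the substitution looks something like this (illustrative; paths depend on your setup):
```
# keep a copy of the original Bafi test set, then put the Bulu data in its place
mv $src_path/test.Bafi.pth $src_path/test.Bafi.pth.bak
cp $src_path/test.Bulu.pth $src_path/test.Bafi.pth
```
Then adjust the parameters below and run the evaluation: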

```
# evaluating language
tgt_pair=
# folder containing the data to be evaluated (must match $tgt_path in eval_data.sh)
src_path=
# You have to change two parameters in the configuration file used to train the LM which evaluates ("data_path":"$src_path" and "eval_only": "True")
# You must also specify the "reload_model" parameter, otherwise the last checkpoint found will be loaded for evaluation.
config_file=../configs/lm_template.json
# languages to be evaluated
eval_lang=
chmod +x ../scripts/evaluate.sh
../scripts/evaluate.sh $eval_lang
```
When the evaluation is finished you will see a file named `eval.log` in the `$dump_path/$exp_name/$exp_id` folder containing the evaluation results.
**Note**: the description given above is only valid when the evaluating LM was trained on a single language (and therefore without TLM). Now consider the case where the base LM was trained on `en-fr` and we want to evaluate it on `de` or `de-ru`. `$tgt_pair` becomes `en-fr`, but `$eval_lang` depends on whether the evaluation is done on one language or two:
- in the case of `de`: `eval_lang=de-de`
- in the case of `de-ru`: `eval_lang=de-ru`.

## IV. References

Please cite [[1]](https://openreview.net/forum?id=Q5ZxoD2LqcI) and/or [[2]](https://arxiv.org/abs/1901.07291) and/or [[3]](https://arxiv.org/abs/1703.03400) if you found the resources in this repository useful.

### On the use of linguistic similarities to improve Neural Machine Translation for African Languages

[1] Tikeng Notsawo Pascal, NANDA ASSOBJIO Brice Yvan and James Assiene
```
@misc{
pascal2021on,
title={On the use of linguistic similarities to improve Neural Machine Translation for African Languages},
author={Tikeng Notsawo Pascal and NANDA ASSOBJIO Brice Yvan and James Assiene},
year={2021},
url={https://openreview.net/forum?id=Q5ZxoD2LqcI}
}
```

### Cross-lingual Language Model Pretraining

[2] G. Lample *, A. Conneau * [*Cross-lingual Language Model Pretraining*](https://arxiv.org/abs/1901.07291) and [facebookresearch/XLM](https://github.com/facebookresearch/XLM)

\* Equal contribution. Order has been determined with a coin flip.

```
@article{lample2019cross,
title={Cross-lingual Language Model Pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={Advances in Neural Information Processing Systems (NeurIPS)},
year={2019}
}
```

### Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

[3] Chelsea Finn, Pieter Abbeel, Sergey Levine [*Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks*](https://arxiv.org/abs/1703.03400) and [cbfinn/maml](https://github.com/cbfinn/maml)

```
@inproceedings{finn2017model,
title={Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks},
author={Finn, Chelsea and Abbeel, Pieter and Levine, Sergey},
booktitle={Proceedings of the 34th International Conference on Machine Learning (ICML), PMLR 70},
year={2017}
}
```

## License

See the [LICENSE](LICENSE) file for more details.