## Early solution for [Google AI4Code](https://www.kaggle.com/competitions/AI4Code) competition
### Overview
This solution is based on Ahmet Erdem's [baseline](https://www.kaggle.com/code/aerdem4/ai4code-pytorch-distilbert-baseline). Instead of predicting the cell position with only the markdown itself, we randomly sample up to 20 code cells to act as the global context. So your input will look something like this:
```
Markdown content
Code content 1
Code content 2
...
Code content 20
```

Ez pz.
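The input layout above can be sketched as follows. This is a hypothetical helper, not the repo's actual code: the real pipeline tokenizes with special separator tokens, while this sketch just joins plain strings to show how one markdown cell is paired with up to 20 sampled code cells.

```python
import random


def build_input(markdown_cell, code_cells, n_code=20, seed=0):
    """Concatenate one markdown cell with up to `n_code` sampled code cells.

    Illustrative sketch only: joins plain strings instead of tokenizing.
    """
    rng = random.Random(seed)
    if len(code_cells) > n_code:
        # Sample, then sort indices so the sampled cells keep notebook order
        # and the global context stays coherent.
        idx = sorted(rng.sample(range(len(code_cells)), n_code))
        sampled = [code_cells[i] for i in idx]
    else:
        sampled = list(code_cells)
    return " ".join([markdown_cell] + sampled)


example = build_input("#intro", [f"c{i}" for i in range(30)])
```

Sampling rather than truncating means the context covers the whole notebook, not just its first cells.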
### Preprocessing
To extract the features for training, including the markdown-only dataframes and the code cells sampled for each notebook, simply run:

```
$ python preprocess.py
```
Your outputs will be in the ```./data``` folder:
```
project
│ train_mark.csv
│ train_fts.json
│ train.csv
│ val_mark.csv
│ val_fts.json
│ val.csv
```

### Training
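Before training, those files can be loaded back like this. This is a hypothetical sketch: the file names match the tree above, but the column names and the demo data are assumptions, not taken from the repo.

```python
import json
import os
import tempfile

import pandas as pd


def load_split(split, data_dir):
    """Load one split produced by preprocess.py (column names are assumptions)."""
    # Markdown-only cells for this split.
    marks = pd.read_csv(os.path.join(data_dir, f"{split}_mark.csv"))
    # Sampled code-cell features, keyed by notebook id.
    with open(os.path.join(data_dir, f"{split}_fts.json")) as f:
        fts = json.load(f)
    # Full cell listing for the split.
    full = pd.read_csv(os.path.join(data_dir, f"{split}.csv"))
    return marks, fts, full


# Demo with dummy files standing in for preprocess.py's real output.
tmp = tempfile.mkdtemp()
pd.DataFrame({"cell_id": ["a"], "source": ["# title"]}).to_csv(
    os.path.join(tmp, "val_mark.csv"), index=False
)
with open(os.path.join(tmp, "val_fts.json"), "w") as f:
    json.dump({"nb1": ["import pandas"]}, f)
pd.DataFrame({"cell_id": ["a", "b"]}).to_csv(os.path.join(tmp, "val.csv"), index=False)

marks, fts, full = load_split("val", tmp)
```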
I found ```codebert-base``` to be the best of all the transformers:

```
$ python train.py --md_max_len 64 --total_max_len 512 --batch_size 8 --accumulation_steps 4 --epochs 5 --n_workers 8
```
The validation score should reach 0.84+ after 3 epochs and correlates well with the public LB.
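For context, the competition scores predicted cell orderings with a Kendall-tau-style rank correlation (aggregated over notebooks in the official metric). A minimal single-notebook version, using a simple O(n²) inversion count for illustration rather than the repo's actual evaluation code, looks like:

```python
def kendall_tau(gt, pred):
    """Rank correlation between two orderings of the same cell ids.

    Returns 1.0 for an identical order and -1.0 for a fully reversed one.
    """
    pos = {cell: i for i, cell in enumerate(gt)}
    ranks = [pos[cell] for cell in pred]
    n = len(ranks)
    # Count inverted pairs: cells whose predicted relative order is wrong.
    inversions = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if ranks[i] > ranks[j]
    )
    total = n * (n - 1) // 2
    return 1 - 2 * inversions / total
```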
### Inference
Please refer to my public notebook: https://www.kaggle.com/code/suicaokhoailang/stronger-baseline-with-code-cells