https://github.com/gugarosa/mh_fine_tuning
📄 Official implementation regarding the paper "Improving Pre-Trained Weights Through Meta-Heuristic Fine-Tuning".
- Host: GitHub
- URL: https://github.com/gugarosa/mh_fine_tuning
- Owner: gugarosa
- License: gpl-3.0
- Created: 2020-06-17T19:04:06.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2022-10-06T12:59:24.000Z (about 3 years ago)
- Last Synced: 2024-10-18T07:40:01.611Z (about 1 year ago)
- Topics: fine-tuning, implementation, machine-learning, meta-heuristic, optimization, paper
- Language: Python
- Homepage:
- Size: 85 KB
- Stars: 1
- Watchers: 4
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Improving Pre-Trained Weights Through Meta-Heuristic Fine-Tuning
*This repository holds all the necessary code to reproduce the experiments described in the paper "Improving Pre-Trained Weights Through Meta-Heuristic Fine-Tuning".*
---
## References
If you use our work to fulfill any of your needs, please cite us:
```BibTex
@inproceedings{deRosa:21,
author={De Rosa, Gustavo H. and Roder, Mateus and Papa, João Paulo and Dos Santos, Claudio F.G.},
booktitle={2021 IEEE Symposium Series on Computational Intelligence (SSCI)},
title={Improving Pre-Trained Weights through Meta-Heuristics Fine-Tuning},
year={2021},
volume={},
number={},
pages={1-8},
doi={10.1109/SSCI50451.2021.9659945}
}
```
---
## Structure
* `core`
* `model.py`: Defines the base Machine Learning architecture;
* `models`
* `cnn.py`: Defines the Residual Network (ResNet18);
* `mlp.py`: Defines the Multi-Layer Perceptron;
* `rnn.py`: Defines the Long Short-Term Memory;
* `outputs`: Folder that holds the saved models and optimization histories, such as `.pth` and `.pkl`;
* `utils`
* `attribute.py`: Rewrites getters and setters for nested attributes (see the sketch after this list);
* `loader.py`: Utility to load datasets and split them into training, validation and testing sets;
* `objects.py`: Wraps object instantiation for command-line usage;
* `optimizer.py`: Wraps the optimization task into a single method;
* `targets.py`: Implements the objective functions to be optimized.
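For illustration, the nested attribute handling described for `attribute.py` can be achieved along these lines; this is a minimal sketch under assumed behaviour, not the repository's exact code:
```Python
import functools

import torch


def rgetattr(obj, attr):
    # Resolves a dotted attribute path, e.g. 'fc.weight', one level at a time
    return functools.reduce(getattr, attr.split('.'), obj)


def rsetattr(obj, attr, value):
    # Resolves the parent object first, then sets the final attribute on it
    parent, _, name = attr.rpartition('.')
    setattr(rgetattr(obj, parent) if parent else obj, name, value)


# Example: overwriting a layer's weights on a toy model
model = torch.nn.Sequential(torch.nn.Linear(4, 2))
rsetattr(model, '0.weight', torch.nn.Parameter(torch.zeros(2, 4)))
```
Helpers of this kind allow the optimization code to address any weight tensor of a model by its dotted name.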
---
## Package Guidelines
### Installation
Install all the required packages using:
```Bash
pip install -r requirements.txt
```
### Data configuration
In order to run the experiments, you can use `torchvision` and `torchtext` to load pre-implemented datasets.
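As an illustration, an image dataset can be fetched with `torchvision` and split into training, validation, and testing sets roughly as follows (a minimal sketch; the actual split performed by `utils/loader.py` may differ):
```Python
import torch
import torchvision
from torchvision import transforms

# Downloads MNIST and keeps the official testing split aside
transform = transforms.ToTensor()
full_train = torchvision.datasets.MNIST('./data', train=True, download=True, transform=transform)
test = torchvision.datasets.MNIST('./data', train=False, download=True, transform=transform)

# Carves a validation fold out of the training portion
train, val = torch.utils.data.random_split(full_train, [50000, 10000])

train_loader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)
val_loader = torch.utils.data.DataLoader(val, batch_size=128)
test_loader = torch.utils.data.DataLoader(test, batch_size=128)
```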
---
## Usage
### Model Training
The first step is to pre-train a Machine Learning architecture. To accomplish such a step, one needs to use the following script:
```Bash
python image_model_training.py -h
```
or
```Bash
python text_model_training.py -h
```
*Note that lines 74 (for image-based models) and 75 (for text-based models) of `core/model.py` should be adjusted according to the script being used.*
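For reference, the pre-training performed by these scripts presumably follows the standard PyTorch loop; a minimal sketch under that assumption (not the repository's exact code):
```Python
import torch


def pre_train(model, train_loader, n_epochs=5, lr=1e-3, device='cpu'):
    # Standard supervised pre-training with cross-entropy loss and Adam
    model.to(device)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(n_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

    # Saves the pre-trained weights, mirroring the `.pth` files kept in `outputs`
    torch.save(model.state_dict(), 'outputs/model.pth')
    return model
```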
### Model Optimization
After conducting the training task, one needs to optimize the weights over the validation set. Please use the following script to accomplish such a procedure:
```Bash
python image_model_optimization.py -h
```
or
```Bash
python text_model_optimization.py -h
```
*Note that `-h` invokes the script helper, which assists users in employing the appropriate parameters.*
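Conceptually, the optimization step treats a candidate weight configuration as the meta-heuristic's search variable and uses validation performance as the fitness. A minimal sketch of such an objective function (an assumption for illustration, not the repository's actual `utils/targets.py`):
```Python
import torch


def validation_objective(model, layer, val_loader, device='cpu'):
    # Builds an objective that overwrites `layer`'s weights with a candidate
    # solution and returns the validation error to be minimized
    def objective(candidate):
        with torch.no_grad():
            new_weights = torch.as_tensor(candidate, dtype=torch.float32, device=device)
            layer.weight.copy_(new_weights.reshape(layer.weight.shape))

        correct, total = 0, 0
        model.eval()
        with torch.no_grad():
            for x, y in val_loader:
                preds = model(x.to(device)).argmax(dim=1)
                correct += (preds == y.to(device)).sum().item()
                total += y.size(0)

        # Meta-heuristics usually minimize, so the fitness is 1 - accuracy
        return 1 - correct / total

    return objective
```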
### Model Evaluation
Finally, with an optimized model in hand, it is now possible to evaluate it over the testing set. Please use the following script to accomplish such a procedure:
```Bash
python image_model_evaluation.py -h
```
or
```Bash
python text_model_evaluation.py -h
```
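Evaluation then amounts to restoring the optimized weights and measuring performance on the held-out testing set; a minimal sketch (the file name is illustrative, not the repository's exact code):
```Python
import torch


def evaluate(model, weights_path, test_loader, device='cpu'):
    # Restores the optimized weights and computes accuracy on the testing set
    model.load_state_dict(torch.load(weights_path, map_location=device))
    model.to(device)
    model.eval()

    correct, total = 0, 0
    with torch.no_grad():
        for x, y in test_loader:
            preds = model(x.to(device)).argmax(dim=1)
            correct += (preds == y.to(device)).sum().item()
            total += y.size(0)

    print(f'Test accuracy: {correct / total:.4f}')


# Example usage, assuming the optimization step saved `outputs/optimized_model.pth`
# evaluate(model, 'outputs/optimized_model.pth', test_loader)
```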
### Bash Script
Instead of invoking every script to conduct the experiments, it is also possible to use one of the provided shell scripts, as follows:
```Bash
./image_pipeline.sh
```
or
```Bash
./text_pipeline.sh
```
Such a script will conduct every step needed to reproduce the experiments reported in the paper. Furthermore, one can change any input argument that is defined in the script.
---
## Support
We know that we do our best, but it is inevitable to acknowledge that we make mistakes. If you ever need to report a bug or a problem, or simply want to talk to us, please do so! We will be available at our best at this repository.
---