https://github.com/zsxkib/nat-cw2
Particle Swarm Optimisation, Genetic Algorithm/Programming for (Gradient-Free) Neural Network Optimisation
- Host: GitHub
- URL: https://github.com/zsxkib/nat-cw2
- Owner: zsxkib
- Created: 2020-11-09T13:42:05.000Z (almost 5 years ago)
- Default Branch: main
- Last Pushed: 2020-12-03T14:05:27.000Z (almost 5 years ago)
- Last Synced: 2025-04-01T13:08:36.903Z (6 months ago)
- Topics: ga, genetic-algorithm, genetic-programming, gp, gradient-free-optimization, natural-computing, particle-swarm-optimisation, pso
- Language: Jupyter Notebook
- Homepage:
- Size: 5.05 MB
- Stars: 0
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# PSO, GA and GP for Neural Network Optimisation (Gradient-Free Methods)
Natural Computing 2020/2021: Assignment 2

## Abstract
We explore the optimisation of neural networks with differing numbers of hidden neurons and layers using gradient-free algorithms. In general, classical optimisation methods reminiscent of Stochastic Gradient Descent tend to perform better than gradient-free methods such as Particle Swarm Optimisation, Genetic Algorithms, and Genetic Programming. Although the optima reached by gradient-free methods are not as good as those found by classical gradient-based methods, they are nevertheless good enough: we reach very small errors on test sets, generalising well to unseen data. The main difference is the time taken to converge.

## Overview
We explore gradient-free optimisation methods for optimising neural networks (NNs), including Particle Swarm Optimisation (PSO), Genetic Algorithms (GA), and Genetic Programming (GP).

These gradient-free algorithms are then compared against baseline NNs optimised with standard gradient-based methods (see Appendix 1 for a description of the baselines): Stochastic Gradient Descent (SGD), the classical way to optimise NNs; Adam, an optimiser similar to SGD; and Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS), a quasi-Newton optimisation method.
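For intuition only, the sketch below compares the three gradient-based baselines via scikit-learn's `MLPClassifier` on a toy dataset; the actual baselines, data, and architectures are defined in the notebooks and Appendix 1, so the dataset, network size, and iteration counts here are assumptions.

```python
# Minimal sketch of the gradient-based baselines (SGD, Adam, LBFGS).
# NOTE: the toy data and hyperparameters are illustrative assumptions,
# not the assignment's actual baseline configuration.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for solver in ("sgd", "adam", "lbfgs"):
    nn = MLPClassifier(hidden_layer_sizes=(10,), solver=solver,
                       max_iter=2000, random_state=0)
    nn.fit(X_train, y_train)
    print(f"{solver}: test accuracy = {nn.score(X_test, y_test):.3f}")
```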
## Install dependencies
```
pip install -r requirements.txt
```

## Task 1: Particle Swarm Optimisation
Run the notebook "Task1.ipynb" in the directory "Task 1 Particle swarm optimisation" and follow the instructions there.
## Task 2: Genetic Algorithm
Run the notebook "GeneticAlgorithm.ipynb" in the directory "Task 2 Genetic Algorithms" and follow the instructions there.
## Task 3: Genetic Programming
Run the notebook "GeneticProgramming.ipynb" in the directory "Task 3 Genetic Programming" and follow the instructions there.