Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/gabri95/neat_competition
autonomous-driving computational intelligence neat network neural neuroevolution pso racing torcs
Last synced: 14 days ago
JSON representation
- Host: GitHub
- URL: https://github.com/gabri95/neat_competition
- Owner: Gabri95
- Created: 2017-11-28T17:11:15.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2017-12-12T14:34:03.000Z (about 7 years ago)
- Last Synced: 2024-12-02T00:33:37.629Z (2 months ago)
- Topics: autonomous-driving, computational, intelligence, neat, network, neural, neuroevolution, pso, racing, torcs
- Language: Python
- Size: 22.4 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.me
Awesome Lists containing this project
README
UvA/VU, MSc Artificial Intelligence, Computational Intelligence course project
Davide Belli, Gabriele Cesa, Linda Petrini
Starting from the main directory of the project (we will call it the main directory):
- To train the model:
python src/nn_evolve.py -g GENERATIONS -f FREQUENCY -p PORT -o OUTPUT_DIR [-c CHECKPOINT] [-s START_SCRIPT]
-g GENERATIONS : number of generations to run; by default it is 10 if not specified
-o OUTPUT_DIR : output directory: set it to /neat/, i.e. the current directory (use '-o .');
                if not specified, nothing is stored to file
-f FREQUENCY : how frequently results (checkpoints and best.pickle) from the training are stored (results are stored
               every FREQUENCY generations). By default it is equal to the total number of generations, and so results
               are stored only at the end of the training
-c CHECKPOINT : if set, initializes the NEAT algorithm (the population) with the result stored in the given file
                (which should have been created in a previous run of the algorithm in the output directory)
-p PORT : port to use for making the server and the client communicate (it has to be in the range [3001, 3010]).
          Moreover, it has to be the one corresponding to the "scr_server" driver set in the "configuration.xml" file.
          (N.B.: make sure the port is available)
-s START_SCRIPT : the file (expected to be found in the client directory) to run in order to start the client. It should
                  accept as command line parameters: the port to use (-p), the file in which to store results (-o) and
                  the driver configuration to use (-d). By default it is 'start.sh'. (See the next section for more
                  details about the client start file.)
N.B.: by default the script expects a number of files to be present in the output directory, but it is possible to pass
their paths through the "-d", "-e", "-n" and "-x" options.
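For example, a direct invocation might look like this (the values are illustrative only; the port has to match the
"scr_server" driver set in your race configuration):
python src/nn_evolve.py -g 20 -f 5 -p 3001 -o .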
However, it is much more encouraged to use the following approach to run an experiment:
- Build the directory of the experiment (we will refer to it as the experiment directory).
- Create inside the experiment directory the following 3 files: 'nn_config', 'fitness.py', 'subsumption.template', and
  the following directory: 'configuration'.
  + 'nn_config': file containing the configuration of the NEAT algorithm. Make sure the number of "input" and "output"
    corresponds to the inputs you are actually considering and to the outputs that the driver you set expects. Here you
    can also set other parameters, such as the size of the population, the kind of initialization of the genomes, which
    activation functions the neurons can use, etc. Shown below are a few other important parameters:
    - initial_connection: can be "partial ", "fully_connected", "fs_neat" or a path (relative to the directory where the
      'nn_config' file is stored, i.e. the experiment directory) where initialization files are stored: the files with
      extension ".params" found there are read and their values are used to initialize the same number of genomes in the
      initial population. If there are not enough initialization files to generate the whole population, the missing
      genomes are generated as with "fully_connected" (adding the number of hidden neurons specified in the
      configuration).
    - output_activation_functions: if set, a LIST of activation functions, one for each output neuron, is expected.
      Hence, make sure the length of that list is equal to the number of output neurons you set. If this parameter is
      not set, the activation functions set in "activation_functions" will be used.
  + 'subsumption.template': configuration file in INI format which describes how the driver is built (following a
    subsumption structure). The titles of the sections (the strings between '[]') have to be 'Layer N', where N is the
    number of the layer from the bottom of the subsumption structure you are defining (counting starts from 1).
    Each layer/section has 1 or 2 options (a small sketch follows this item):
    - 'type' specifies the name of the Python class which implements the desired layer
    - 'model_path' specifies the path (N.B.: relative to the path of the folder containing this file) to the model
      (usually a pickle file) which is used in this layer. Note that not all layers require it (e.g. UnstuckLayer
      doesn't). If present, set this parameter to '$phenotype' to make the training algorithm set it dynamically to the
      path of the next phenotype to evaluate.
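    For illustration only, such a template might look roughly as follows (apart from UnstuckLayer, which is mentioned in
    this README, the class name and the layer ordering below are assumptions; use the Python class names actually
    defined in this project):
        [Layer 1]
        type = NetworkLayer
        model_path = $phenotype

        [Layer 2]
        type = UnstuckLayer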
  + 'fitness.py': Python file which has to contain a function named "evaluate" accepting one parameter (results).
    This parameter will be a dictionary with:
    - keys: the names of the races run (corresponding to the names of the configuration files without the extension;
      see the next point, 'configuration')
    - values: a list of entries relative to that race. Each entry is a list of values stored by the client every 100
      steps (and at the end of the simulation).
    This function has to return a scalar value representing the fitness of a genome, given these data about its
    performance on the races.
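    A minimal sketch of such a file is shown below (which value sits at which position of an entry depends on what your
    client logs, so the index used here is purely an assumption):
        # fitness.py -- illustrative sketch of the expected interface
        def evaluate(results):
            # 'results' maps each race name to the list of entries logged by the client for that race
            total = 0.0
            for entries in results.values():
                if not entries:
                    continue
                last_entry = entries[-1]   # values logged at the end of the simulation
                total += last_entry[0]     # ASSUMPTION: the first logged value is the distance raced
            # average over all the races that were run
            return total / max(len(results), 1)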
  + 'configuration': directory containing a number of XML files, each describing the configuration of a race. Make sure
    that there is a "scr_server" driver among the drivers and that its "idx" corresponds to the port you want to use
    (idx=0 -> port=3001, ..., idx=8 -> port=3009).
    Here you can also set the track, either the number of laps or the length of the race (in km), and the opponents
    which will race against your driver. The name of each file, without the extension, will be used to indicate the
    corresponding race in the logs and in the records of the race.
  + 'configuration.xml': if you just need to run a single race, you can instead use only one configuration file with
    this name in the experiment directory (instead of the 'configuration' directory).
N.B.: it is strongly suggested to use the example files provided and edit them to use your own fitness function, track, network shape, etc.
- Create a "run.sh" file (again, it is better to use the example one and change the parameters inside it): this file is
  used to run the experiment more easily; it just runs "nn_evolve.py" with the set of command line parameters you choose.
- From inside your experiment directory, run "./run.sh".
- If nothing crashes, wait for the results (meanwhile you can stare at the beautiful plots the script builds in your
  experiment directory).
- To try a model:
1) run the torcs simulator with:
torcs -nodamage -nofuel -nolaptime
go to quickrace and, in the 'select drivers' configuration, set as driver scr_server_[n] (where [n] is in [1, 10])
N.B.: if you are still running some experiments, make sure that you don't use a port which is also in use by those experiments.
2) in another terminal:
cd torcs-client
./start.sh -p 300[n] -d DRIVER_CONFIG [-o RESULTS_FILE]
(if you used a different start file during the training, you should use it now instead of start.sh)
DRIVER_CONFIG: a configuration file like the 'subsumption.template' file explained before.
It has to contain all the information needed to build a driver (N.B.: obviously you can't use '$phenotype' here anymore;
you have to set the path to the pickle file you want to use, containing the controller for the layer).
It can be useful to create a 'best.conf' file in your experiment folder, equal to the 'subsumption.template' but setting
the path of the model to 'best.pickle'; this file is automatically created by the algorithm at the end of each
generation in the experiment directory.
RESULTS_FILE: path to the file where the final results of the race are stored
(the information which is usually used to compute the fitness function).
During the experiments it is set to a file inside the '/model_results/' folder of the experiment directory.
However, you probably don't want to set it now, so just skip it.
The other parameter (-p) has the same meaning as the one explained at the beginning for the "nn_evolve.py" script.
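For example (illustrative values only; this assumes a 'best.conf' file as described above and that port 3001 is the one
assigned to the scr_server driver you selected):
./start.sh -p 3001 -d best.conf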
N.B.: newer kinds of drivers have been implemented in 'general_driver.py' and 'composite_driver.py'. They require slightly different configuration files to be passed (-d) and can be run using, respectively, 'run2.py' and 'run4.py' in the torcs-client folder. For example, launching 'start4.sh' will in turn run the run4.py script in order to use the 'composite_driver'.
The configuration files for these 2 drivers are quite similar:
- There has to be a section (title between '[]') named 'Models' with:
  + 2 options, 'model1' and 'model2': each of them contains a list of section titles which have to describe a particular
    layer of that model
- For each of the layers listed in the 2 options explained above, there has to be a section describing that layer (it
  has the same structure as in the configuration file before).
Moreover, if the driver is 'composite_driver', the 'Models' section has to have another option, 'oracle', which contains the path (relative to the configuration file) to the oracle model that chooses how to weigh the 2 models at each timestep. Furthermore, it is also possible to use a 'toplayer' option, where another list of layers can be set: those layers will be stacked upon the oracle (and so upon the 2 models). We used this for the unstuck layer, which we didn't want to be affected by the oracle but to have a higher priority. A rough sketch of such a configuration is shown below.
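For illustration only, a 'composite_driver' configuration might then look roughly like this (the section names listed
under 'Models', the class names other than UnstuckLayer, the pickle file names and the list syntax are all assumptions):
    [Models]
    model1 = ModelA
    model2 = ModelB
    oracle = oracle.pickle
    toplayer = Unstuck

    [ModelA]
    type = NetworkLayer
    model_path = model_a.pickle

    [ModelB]
    type = NetworkLayer
    model_path = model_b.pickle

    [Unstuck]
    type = UnstuckLayer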