https://github.com/alexanderkell/carbon_optimiser
Use of reinforcement learning to optimise carbon tax to reduce carbon emissions and costs
- Host: GitHub
- URL: https://github.com/alexanderkell/carbon_optimiser
- Owner: alexanderkell
- License: mit
- Created: 2019-02-28T19:41:35.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2022-12-08T01:40:17.000Z (over 2 years ago)
- Last Synced: 2025-01-10T14:04:04.709Z (5 months ago)
- Language: Python
- Size: 5.37 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 21
Metadata Files:
- Readme: README.md
- License: LICENSE
README
Carbon Optimiser
================

This repository contains code to optimise a long-term carbon-tax strategy that reduces the carbon emissions and costs of an electricity market.
We use [ElecSim](https://github.com/alexanderkell/elecsim) as the simulation model and [Ray RLlib](https://ray.readthedocs.io/en/latest/rllib.html) as a package for distributed reinforcement learning.
Usage
-----

The WorldEnvironment class of ElecSim is used as an [OpenAI gym](https://gym.openai.com/) environment, providing the interface between the simulation and the reinforcement learning algorithm.
We use [Ray RLlib](https://ray.readthedocs.io/en/latest/rllib.html) for its distributed reinforcement learning algorithms, and run a number of different reinforcement learning experiments. An example is shown [here](https://github.com/alexanderkell/carbon_optimiser/blob/master/src/models/carbon_optimiser_northern_ireland.py).
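The gym-style reset/step loop that these experiments rely on can be sketched as follows. This is a hypothetical stand-in for ElecSim's WorldEnvironment: the class name `CarbonTaxEnv` and its toy dynamics are illustrative assumptions, not the real simulation; only the reset/step interface mirrors the OpenAI gym convention referred to above.

```python
# Minimal sketch of a gym-style environment for carbon-tax control.
# Hypothetical stand-in for ElecSim's WorldEnvironment: the dynamics
# below are illustrative toy formulas, not the real electricity-market model.
import random


class CarbonTaxEnv:
    """Gym-style interface: reset() -> obs, step(action) -> (obs, reward, done, info)."""

    def __init__(self, horizon=10, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)
        self.t = 0
        self.emissions = 100.0  # illustrative starting emissions (arbitrary units)

    def reset(self):
        self.t = 0
        self.emissions = 100.0
        return [self.emissions]

    def step(self, carbon_tax):
        # Toy dynamics: a higher tax lowers emissions but raises electricity cost.
        self.emissions *= max(0.0, 1.0 - 0.01 * carbon_tax)
        cost = carbon_tax * 0.5 + self.rng.uniform(0.0, 1.0)
        reward = -(self.emissions + cost)  # minimise emissions plus cost
        self.t += 1
        done = self.t >= self.horizon
        return [self.emissions], reward, done, {}


# Roll out a fixed-tax baseline policy through one episode.
env = CarbonTaxEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    obs, reward, done, _ = env.step(carbon_tax=20.0)
    total_reward += reward
```

An RL algorithm would replace the fixed `carbon_tax=20.0` action with one chosen by its learned policy, which is exactly how RLlib consumes a gym environment.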
Installation
------------

The installation of ``elecsim`` and ``ray[rllib]`` is required. Make sure you have installed Python and the Python package installer, pip, then run the following commands:
```
pip install elecsim
pip install "ray[rllib]"
```

Once this is done, you can run your desired reinforcement learning algorithm as shown [here](https://github.com/alexanderkell/carbon_optimiser/blob/master/src/models/carbon_optimiser_northern_ireland.py).
Docker
------

The optimiser can be run with your own custom reinforcement learning file through Docker.
Simply pull from dockerhub with the following command:
```
docker pull alexkell/carbon-optimiser
```
Next, to run the reinforcement learning algorithm while saving the run data and Ray results, run the following command:
```
docker run --shm-size=2G -it -v :/myvol -v :/root/ray_results alexkell/carbon-optimiser:latest
```
Replace the paths in `<>` with your own directories:

- The directory mounted at `/myvol` is where output data from elecsim is written, enabling you to visualise individual run characteristics such as carbon tax, cost of electricity and electricity supply type.
- The directory mounted at `/root/ray_results` is where output data from Ray RLlib is written. This provides information on the reinforcement learning algorithm, which can be visualised using [tensorboard](https://www.tensorflow.org/guide/summaries_and_tensorboard). Checkpoints of the weights for the reinforcement learning algorithm are also saved here.
- Finally, mount the path where you have stored your version of the [reinforcement algorithm](https://github.com/alexanderkell/carbon_optimiser/blob/master/src/models/carbon_optimiser_northern_ireland.py).

Compatibility
-------------

Ray RLlib is not compatible with Windows; however, it will run on Unix-based systems (Linux, macOS).
Licence
-------

MIT License
Authors
-------

`carbon_optimiser` was written by Alexander Kell.