Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/praveen-palanisamy/macad-gym
Multi-Agent Connected Autonomous Driving (MACAD) Gym environments for Deep RL. Code for the paper presented in the Machine Learning for Autonomous Driving Workshop at NeurIPS 2019: https://arxiv.org/abs/1911.04175
- Host: GitHub
- URL: https://github.com/praveen-palanisamy/macad-gym
- Owner: praveen-palanisamy
- License: MIT
- Created: 2019-05-14T20:20:03.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2023-05-20T22:19:30.000Z (over 1 year ago)
- Last Synced: 2024-09-21T07:21:50.300Z (about 2 months ago)
- Topics: autonomous-driving, carla, carla-driving-simulator, carla-gym, carla-reinforcement-learning, carla-rl, carla-simulator, deep-reinforcement-learning, gym-environments, macad-gym, multi-agent-autonomous-driving, multi-agent-reinforcement-learning
- Language: Python
- Homepage: https://arxiv.org/abs/1911.04175
- Size: 2.02 MB
- Stars: 327
- Watchers: 10
- Forks: 73
- Open Issues: 15
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Citation: CITATION.cff
## README
![MACAD-Gym learning environment 1](docs/images/macad-gym-urban_4way_intrx_2c1p1m.png)
[MACAD-Gym](https://arxiv.org/abs/1911.04175) is a training platform for Multi-Agent Connected Autonomous Driving (MACAD) built on top of the CARLA autonomous driving simulator. MACAD-Gym provides OpenAI Gym-compatible learning environments for various driving scenarios for training Deep RL algorithms in homogeneous/heterogeneous, communicating/non-communicating, and other multi-agent settings. New environments and scenarios can be easily added using a simple, JSON-like configuration.

[![PyPI version fury.io](https://badge.fury.io/py/macad-gym.svg)](https://pypi.python.org/pypi/macad-gym/)
[![PyPI format](https://img.shields.io/pypi/pyversions/macad-gym.svg)](https://pypi.python.org/pypi/macad-gym/)
[![Downloads](https://pepy.tech/badge/macad-gym)](https://pepy.tech/project/macad-gym)
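The JSON-like scenario configuration mentioned above is split into an `env` part and an `actors` part (this split, and the `discrete_actions` key, appear in the agent-interface example later in this README). The sketch below is purely illustrative; the individual field names are assumptions, not the exact schema:

```python
# Hypothetical scenario configuration (illustrative only). The "env"/"actors"
# split and the "discrete_actions" key are used later in this README; the
# remaining field names are assumed for illustration.
example_configs = {
    "env": {
        "discrete_actions": True,   # agents pick from a small action set
        "render": False,            # assumed key: headless simulation
    },
    "actors": {
        "car1": {"type": "vehicle", "camera": "rgb"},  # assumed actor fields
        "car2": {"type": "vehicle", "camera": "rgb"},
    },
}
```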
### Quick Start

Install MACAD-Gym using `pip install macad-gym`.
If you have `CARLA_SERVER` set up, you can get going using the following 3 lines of code. If not, follow the [Getting started steps](#getting-started).

#### Training RL Agents
```python
import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")# Your agent code here
```Any RL library that supports the OpenAI-Gym API can be used to train agents in MACAD-Gym. The [MACAD-Agents](https://github.com/praveen-palanisamy/macad-agents) repository provides sample agents as a starter.
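Before wiring in a full RL library, a quick way to exercise the environment is a random-action rollout. The sketch below is illustrative only; it relies on the per-actor `Dict` action space and the `done["__all__"]` convention shown later in this README:

```python
import gym
import macad_gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    # Sample one random action per actor from the Dict action space.
    actions = {actor_id: space.sample()
               for actor_id, space in env.action_space.spaces.items()}
    obs, reward, done, info = env.step(actions)
env.close()
```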
#### Visualizing the Environment
To test-drive the environments, you can run the environment script directly. For example, to test-drive the `HomoNcomIndePOIntrxMASS3CTWN3-v0` environment, run:
```bash
python -m macad_gym.envs.homo.ncom.inde.po.intrx.ma.stop_sign_3c_town03
```

### Usage guide
- [Getting Started](#getting-started)
- [Learning Platform and Agent Interface](#learning-platform-and-agent-interface)
- [Environments](#environments)
- [Agent interface](#agent-interface)
- [Citing MACAD-Gym](#citing)
- [Developer Contribution Guide](CONTRIBUTING.md)

### Getting Started
> Assumes an Ubuntu (18.04/20.04/22.04 or later) system.
> If you are on Windows 10/11, use the CARLA Windows package and set the `CARLA_SERVER` environment variable to the CARLA installation directory.

1. Install the system requirements:
- Miniconda/Anaconda 3.x
- `wget -P ~ https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh; bash ~/Miniconda3-latest-Linux-x86_64.sh`
- cmake (`sudo apt install cmake`)
- zlib (`sudo apt install zlib1g-dev`)
- [optional] ffmpeg (`sudo apt install ffmpeg`)
1. Setup CARLA (0.9.x):
   3.1 `mkdir ~/software && cd ~/software`
   3.2 Example: Download the 0.9.13 release version from [the CARLA releases page](https://github.com/carla-simulator/carla/releases) and extract it into `~/software/CARLA_0.9.13`
   3.3 `echo "export CARLA_SERVER=${HOME}/software/CARLA_0.9.13/CarlaUE4.sh" >> ~/.bashrc`
1. Install MACAD-Gym:
   - **Option 1 for users**: `pip install macad-gym`
   - **Option 2 for developers**:
- Fork/Clone the repository to your workspace:
`git clone https://github.com/praveen-palanisamy/macad-gym.git && cd macad-gym`
- Create a new conda env named "macad-gym" and install the required packages:
`conda env create -f conda_env.yml`
- Activate the `macad-gym` conda python env:
`source activate macad-gym`
- Install the `macad-gym` package:
`pip install -e .`
- Install CARLA PythonAPI: `pip install carla==0.9.13`
> NOTE: Change the carla client PyPI package version number to match your CARLA server version
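As a quick, illustrative sanity check of the installation (not an official verification step), confirm that `CARLA_SERVER` is set and that the package imports cleanly:

```python
# Minimal install check (illustrative). CARLA_SERVER should point to the
# CarlaUE4.sh script exported during the CARLA setup step above.
import os

import macad_gym  # noqa: F401 -- importing the package registers the envs

assert os.environ.get("CARLA_SERVER"), "Set CARLA_SERVER to your CarlaUE4.sh path"
print("MACAD-Gym import OK; CARLA server script:", os.environ["CARLA_SERVER"])
```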
### Learning Platform and Agent Interface
The MACAD-Gym platform provides learning environments for training agents in both single-agent and multi-agent settings, covering various autonomous driving tasks and scenarios and enabling training in homogeneous/heterogeneous, communicating/non-communicating, and other multi-agent configurations.

The learning environments follow a naming convention for the environment ID so that IDs stay consistent and support versioned benchmarking of agent algorithms.
The naming convention is illustrated below with `HeteCommCoopPOUrbanMgoalMAUSID`
as an example:
![MACAD-Gym Naming Conventions](docs/images/macad-gym-naming-conventions.png)

The number of training environments in MACAD-Gym is expected to grow over time (PRs are very welcome!).
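To make the convention concrete, here is a small, purely illustrative decoder for the leading segments of an environment ID. The segment meanings are taken from the two example IDs and the scenario descriptions in this README; it is not part of the MACAD-Gym API:

```python
# Hypothetical helper: decode the leading segments of a MACAD-Gym env ID.
# Prefix meanings are inferred from the examples in this README.
SEGMENTS = [
    ("agents", {"Homo": "Homogeneous", "Hete": "Heterogeneous"}),
    ("communication", {"Ncom": "Non-communicating", "Comm": "Communicating"}),
    ("cooperation", {"Inde": "Independent", "Coop": "Cooperative"}),
    ("observability", {"PO": "Partially Observable"}),
]

def describe_env_id(env_id):
    """Map each recognized leading segment of `env_id` to its meaning."""
    fields, rest = {}, env_id
    for name, options in SEGMENTS:
        for prefix, meaning in options.items():
            if rest.startswith(prefix):
                fields[name] = meaning
                rest = rest[len(prefix):]
                break
    fields["scenario/map/version"] = rest  # e.g. "IntrxMASS3CTWN3-v0"
    return fields

print(describe_env_id("HomoNcomIndePOIntrxMASS3CTWN3-v0"))
```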
#### Environments
The environment interface is simple and follows the widely adopted OpenAI-Gym
interface. You can create an instance of a learning environment using the
following 3 lines of code:

```python
import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
```

Like any OpenAI Gym environment, you can obtain the observation and action
spaces as shown below:

```bash
>>> print(env.observation_space)
Dict(car1:Box(168, 168, 3), car2:Box(168, 168, 3), car3:Box(168, 168, 3))
>>> print(env.action_space)
Dict(car1:Discrete(9), car2:Discrete(9), car3:Discrete(9))
```
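Observations returned by `reset()` and `step()` follow the same per-actor `Dict` convention, so each agent's camera frame can be pulled out by actor ID (a minimal illustration using the `car1` ID and the `env` from the snippets above):

```python
# Each actor receives its own observation; for this environment that is a
# 168x168x3 image, matching the Box space printed above.
obs = env.reset()
print(obs["car1"].shape)  # -> (168, 168, 3)
```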
To get a list of available environments, you can use the `list_available_envs()` function as shown in the code snippet below:

```python
import gym
import macad_gym
macad_gym.list_available_envs()
```
This will print the available environments. Sample output is provided below for reference:

```bash
Environment-ID: Short description
{'HeteNcomIndePOIntrxMATLS1B2C1PTWN3-v0': 'Heterogeneous, Non-communicating, '
'Independent,Partially-Observable '
'Intersection Multi-Agent scenario '
'with Traffic-Light Signal, 1-Bike, '
'2-Car,1-Pedestrian in Town3, '
'version 0',
'HomoNcomIndePOIntrxMASS3CTWN3-v0': 'Homogeneous, Non-communicating, '
'Independent, Partially-Observable '
'Intersection Multi-Agent scenario with '
'Stop-Sign, 3 Cars in Town3, version 0'}
```
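Alternatively, since importing `macad_gym` registers the environments with Gym, the IDs can also be filtered through Gym's registry. This sketch assumes the classic `gym` registry API (`gym.envs.registry.all()`), which differs across Gym versions:

```python
import gym
import macad_gym  # noqa: F401 -- importing registers the MACAD-Gym env IDs

# Illustrative filter: collect every registered Town3 MACAD-Gym environment.
town3_ids = [spec.id for spec in gym.envs.registry.all() if "TWN3" in spec.id]
print(town3_ids)
```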
#### Agent interface

The Agent-Environment interface is compatible with the OpenAI Gym interface, thus allowing for easy experimentation with existing RL agent algorithm implementations and libraries. You can use any existing Deep RL library that supports the OpenAI Gym API to train your agents.

The basic agent-environment interaction loop is as follows:
```python
import gym
import macad_gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
configs = env.configs
env_config = configs["env"]
actor_configs = configs["actors"]


class SimpleAgent(object):
def __init__(self, actor_configs):
"""A simple, deterministic agent for an example
Args:
actor_configs: Actor config dict
"""
self.actor_configs = actor_configs
        self.action_dict = {}

    def get_action(self, obs):
""" Returns `action_dict` containing actions for each agent in the env
"""
for actor_id in self.actor_configs.keys():
# ... Process obs of each agent and generate action ...
if env_config["discrete_actions"]:
self.action_dict[actor_id] = 3 # Drive forward
else:
self.action_dict[actor_id] = [1, 0] # Full-throttle
        return self.action_dict


agent = SimpleAgent(actor_configs)  # Plug-in your agent or use MACAD-Agents
for ep in range(2):
obs = env.reset()
done = {"__all__": False}
step = 0
while not done["__all__"]:
obs, reward, done, info = env.step(agent.get_action(obs))
print(f"Step#:{step} Rew:{reward} Done:{done}")
step += 1
env.close()
```

### Citing
If you find this work useful in your research, please cite:
```bibtex
@misc{palanisamy2019multiagent,
title={Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning},
author={Praveen Palanisamy},
year={2019},
eprint={1911.04175},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```

Citation in other formats:

- MLA: Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
- APA: Palanisamy, P. (2019). Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
- Chicago: Palanisamy, Praveen. "Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning." arXiv preprint arXiv:1911.04175 (2019).
- Harvard: Palanisamy, P., 2019. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175.
- Vancouver: Palanisamy P. Multi-Agent Connected Autonomous Driving using Deep Reinforcement Learning. arXiv preprint arXiv:1911.04175. 2019 Nov 11.

### Notes
- MACAD-Gym supports multi-GPU setups and will launch the simulation needed for the RL training environment on the least-loaded GPU.
- MACAD-Gym is for CARLA 0.9.x and above. If you are looking for an OpenAI Gym-compatible agent learning environment for CARLA 0.8.x (stable release), use [this carla_gym environment](https://github.com/PacktPublishing/Hands-On-Intelligent-Agents-with-OpenAI-Gym/tree/master/ch8/environment).