https://github.com/NVlabs/traffic-behavior-simulation
- Host: GitHub
- URL: https://github.com/NVlabs/traffic-behavior-simulation
- Owner: NVlabs
- License: other
- Created: 2023-03-09T06:32:28.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-17T03:00:57.000Z (4 months ago)
- Last Synced: 2024-07-18T05:52:46.151Z (4 months ago)
- Language: Python
- Size: 7.51 MB
- Stars: 159
- Watchers: 11
- Forks: 21
- Open Issues: 5
Metadata Files:
- Readme: README.md
- License: LICENSE
- Citation: CITATION.cff
# Traffic Behavior Simulation (tbsim)
TBSIM is a simulation environment designed for data-driven closed-loop simulation of autonomous vehicles. It supports training and evaluation of popular traffic models such as behavior cloning, CVAE, and our new [BITS](https://arxiv.org/abs/2208.12403) model, designed specifically for AV simulation. Users can flexibly specify the simulation environment and plug in their own models (learned or analytic) for evaluation. Thanks to [trajdata](https://github.com/NVlabs/trajdata), TBSIM can access data and scenarios from a wide range of public datasets, including [Lyft Level 5](https://woven.toyota/en/prediction-dataset), [nuScenes](https://www.nuscenes.org/nuscenes), and [nuPlan](https://nuplan.org/).

TBSIM comes with a rich set of utility functions and supports parallel batched simulation, logging, and replay. We also provide a suite of simulation metrics that measure the safety, liveness, and diversity of the simulation.
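TBSIM's actual metric implementations live inside the package; purely as an illustration of the kind of safety metric such a suite computes, here is a minimal, hypothetical collision-rate check over simulated agent positions (the function name and data layout are assumptions, not tbsim's API):

```python
import math

def collision_rate(positions, radius=1.0):
    """Fraction of timesteps in which any pair of agents is closer than
    2 * radius. `positions` is a list of timesteps, each a list of (x, y)
    agent centers. Illustrative stand-in, not tbsim's metric."""
    if not positions:
        return 0.0
    colliding = 0
    for frame in positions:
        hit = False
        for i in range(len(frame)):
            for j in range(i + 1, len(frame)):
                dx = frame[i][0] - frame[j][0]
                dy = frame[i][1] - frame[j][1]
                if math.hypot(dx, dy) < 2 * radius:
                    hit = True
        if hit:
            colliding += 1
    return colliding / len(positions)

# Two agents that come too close only in the second of two frames:
frames = [[(0.0, 0.0), (10.0, 0.0)], [(0.0, 0.0), (1.5, 0.0)]]
print(collision_rate(frames))  # → 0.5
```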
## Installation
Install `tbsim`
```
conda create -n tbsim python=3.8
conda activate tbsim
git clone [email protected]:NVlabs/traffic-behavior-simulation.git tbsim
cd tbsim
pip install -e .
```
Install `trajdata`
```
cd ..
git clone ssh://[email protected]:NVlabs/trajdata.git trajdata
cd trajdata
# replace requirements.txt with trajdata_requirements.txt included in tbsim
pip install -e .
```
Install `Pplan`
```
cd ..
git clone ssh://[email protected]:NVlabs/spline-planner.git Pplan
cd Pplan
pip install -e .
```
You usually need to install `torch` separately, matching your hardware setup (OS, GPU, CUDA version, etc.); see https://pytorch.org/get-started/locally/ for instructions.
## Quick start
### 1. Obtain dataset(s)
We currently support the Lyft Level 5 [dataset](https://woven.toyota/en/prediction-dataset) and the nuScenes [dataset](https://www.nuscenes.org/nuscenes).

#### Lyft Level 5
* Download the Lyft Prediction dataset (only the metadata and the map) and organize the dataset directory as follows:
```
lyft_prediction/
│ aerial_map/
│ semantic_map/
│ meta.json
└───scenes
│ │ sample.zarr
│ │ train_full.zarr
│ │ train.zarr
│ │ validate.zarr
```

#### nuScenes
* Download the nuScenes dataset (with the v1.3 map extension pack) and organize the dataset directory as follows:
```
nuscenes/
│ maps/
│ └── expansion/
│ v1.0-mini/
│ v1.0-trainval/
```
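Both layouts above must match exactly for trajdata to locate the maps and scenes, so a quick sanity check before launching a training run can save time. The helper below is a hypothetical convenience built from the trees above, not part of tbsim:

```python
from pathlib import Path

# Expected sub-paths for each supported dataset, taken from the
# directory trees shown above.
EXPECTED = {
    "lyft": ["aerial_map", "semantic_map", "meta.json", "scenes"],
    "nuscenes": ["maps/expansion", "v1.0-mini", "v1.0-trainval"],
}

def check_dataset_dir(root, kind):
    """Return the list of expected entries missing under `root`."""
    root = Path(root)
    return [p for p in EXPECTED[kind] if not (root / p).exists()]

missing = check_dataset_dir("/data/lyft_prediction", "lyft")
if missing:
    print("Missing entries:", missing)
```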
### 2. Train a behavior cloning model
Lyft dataset (set `--debug` flag to suppress wandb logging):
```
python scripts/train.py --dataset_path --config_name l5_bc --debug
```
nuScenes dataset (set `--debug` flag to suppress wandb logging):
```
python scripts/train.py --dataset_path --config_name nusc_bc --debug
```
See the list of registered algorithms in `configs/registry.py`.
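The names accepted by `--config_name` are resolved through `configs/registry.py`. The general shape of such a registry is a name-to-factory mapping along these lines (a hypothetical sketch; tbsim's real registry entries and config fields will differ):

```python
# Hypothetical sketch of a config registry: maps an experiment name to a
# factory that builds its configuration. tbsim's actual registry lives in
# configs/registry.py and is not guaranteed to look like this.
EXP_CONFIG_REGISTRY = {}

def register(name):
    def wrap(factory):
        EXP_CONFIG_REGISTRY[name] = factory
        return factory
    return wrap

@register("l5_bc")
def l5_bc_config():
    # Illustrative fields only.
    return {"dataset": "lyft", "algo": "bc", "batch_size": 64}

def get_config(name):
    if name not in EXP_CONFIG_REGISTRY:
        raise KeyError(f"unknown config_name {name!r}; "
                       f"known: {sorted(EXP_CONFIG_REGISTRY)}")
    return EXP_CONFIG_REGISTRY[name]()

print(get_config("l5_bc")["algo"])  # → bc
```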
### 3. Train BITS model
Lyft dataset:
First train a spatial planner:
```
python scripts/train.py --dataset_path --config_name l5_spatial_planner --debug
```
Then train a multiagent predictor:
```
python scripts/train.py --dataset_path --config_name l5_agent_predictor --debug
```
nuScenes dataset:
First train a spatial planner:
```
python scripts/train.py --dataset_path --config_name nusc_spatial_planner --debug
```
Then train a multiagent predictor:
```
python scripts/train.py --dataset_path --config_name nusc_agent_predictor --debug
```
See the list of registered algorithms in `configs/registry.py`.
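As described in the paper, BITS is bi-level: a spatial planner proposes a high-level goal and a lower-level predictor produces the trajectory toward it, which is why training happens in two stages above. A toy, hypothetical illustration of that two-stage decomposition (none of these names or behaviors are tbsim's):

```python
# Toy bi-level rollout: a "spatial planner" scores candidate goals, then a
# "predictor" fills in a straight-line trajectory toward the best goal.
# Purely illustrative of the two-stage structure, not BITS itself.

def plan_goal(candidates, score):
    """High level: pick the candidate goal with the best score."""
    return max(candidates, key=score)

def predict_trajectory(start, goal, steps):
    """Low level: linearly interpolate from start to goal."""
    return [
        (start[0] + (goal[0] - start[0]) * t / steps,
         start[1] + (goal[1] - start[1]) * t / steps)
        for t in range(steps + 1)
    ]

goals = [(10.0, 0.0), (0.0, 5.0)]
# Prefer goals farther along the x-axis (stand-in for a learned score map).
best = plan_goal(goals, score=lambda g: g[0])
traj = predict_trajectory((0.0, 0.0), best, steps=2)
print(best, traj)  # → (10.0, 0.0) [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
```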
### 4. Evaluate a trained model (closed-loop simulation)
```
python scripts/evaluate.py \
--results_root_dir results/ \
--num_scenes_per_batch 2 \
--dataset_path \
--env \
--policy_ckpt_dir \
--policy_ckpt_key \
--eval_class BC \
--render
```

### 5. Closed-loop simulation with BITS
With the spatial planner and multiagent predictor trained, one can run BITS simulation with
```
python scripts/evaluate.py \
--results_root_dir results/ \
--dataset_path \
--env \
--ckpt_yaml \
--eval_class HierAgentAware \
--render
```
The `ckpt_yaml` file specifies the checkpoints for the spatial planner and predictor; an example using pretrained checkpoints can be found at `evaluation/BITS_example.yaml`. Pretrained checkpoints can be downloaded from [this link](https://drive.google.com/drive/folders/1y3_HO1c721pFrFOYeGGjORV58g6zNEds?usp=drive_link).
If you are using VS Code, you can also refer to the `launch.json` file.
### 6. Closed-loop evaluation of policy with BITS
TBSIM allows the ego vehicle to use a policy separate from that of the other agents. An example command is
```
python scripts/evaluate.py \
--results_root_dir results/ \
--dataset_path \
--env \
--ckpt_yaml \
--eval_class \
    --agent_eval_class=HierAgentAware \
--render
```
Here your policy should be declared in `tbsim/evaluation/policy_composer.py`.
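A custom ego policy plugged in this way generally needs to expose an action-selection interface. The exact contract is defined by the composers in `tbsim/evaluation/policy_composer.py`; the class below is only a hypothetical sketch of the general shape such a policy takes (the method and field names are assumptions):

```python
# Hypothetical sketch of a pluggable ego policy: hold parameters, map an
# observation to an action. The actual interface tbsim expects is defined
# in tbsim/evaluation/policy_composer.py and may differ.
class ConstantVelocityPolicy:
    def __init__(self, vx=1.0, vy=0.0):
        self.vx = vx
        self.vy = vy

    def get_action(self, obs):
        """Ignore the observation and keep driving straight."""
        return {"vx": self.vx, "vy": self.vy}

policy = ConstantVelocityPolicy(vx=2.0)
print(policy.get_action(obs={"ego_position": (0.0, 0.0)}))
# → {'vx': 2.0, 'vy': 0.0}
```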
## BibTeX Citation

If you use TBSIM in a scientific publication, we would appreciate a citation using the following entry:
```
@inproceedings{xu2023bits,
title={Bits: Bi-level imitation for traffic simulation},
author={Xu, Danfei and Chen, Yuxiao and Ivanovic, Boris and Pavone, Marco},
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
pages={2929--2936},
year={2023},
organization={IEEE}
}
```