{"id":25860007,"url":"https://github.com/trust-ai/SafeBench","last_synced_at":"2025-03-01T21:43:39.362Z","repository":{"id":42865479,"uuid":"504215767","full_name":"trust-ai/SafeBench","owner":"trust-ai","description":"A Benchmark for Evaluating Autonomous Vehicles in Safety-critical Scenarios","archived":false,"fork":false,"pushed_at":"2024-02-23T18:45:57.000Z","size":85707,"stargazers_count":84,"open_issues_count":12,"forks_count":20,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-05-22T00:05:58.385Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://safebench.github.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/trust-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2022-06-16T15:48:45.000Z","updated_at":"2024-05-19T14:11:32.000Z","dependencies_parsed_at":"2024-02-19T20:18:57.667Z","dependency_job_id":null,"html_url":"https://github.com/trust-ai/SafeBench","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/trust-ai%2FSafeBench","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/trust-ai%2FSafeBench/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/trust-ai%2FSafeBench/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/trust-ai%2FSafeBench/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/trust-ai","download_url":"https://codeload.github.com/trust-ai/SafeBench/tar.
gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":241430308,"owners_count":19961635,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-03-01T21:43:30.064Z","updated_at":"2025-03-01T21:43:39.351Z","avatar_url":"https://github.com/trust-ai.png","language":"Python","readme":"\u003c!--\n * @Date: 2023-01-25 19:36:50\n * @LastEditTime: 2023-04-12 14:02:50\n * @Description: \n--\u003e\n\n\u003cdiv align=\"center\"\u003e\n\n\u003cimg src=\"https://github.com/trust-ai/SafeBench/blob/main/docs/source/images/logo.png\" alt=\"logo\" width=\"400\"/\u003e\n\n\u003ch1\u003eSafeBench: A Benchmark for Evaluating Autonomous Vehicles in Safety-critical Scenarios\u003c/h1\u003e\n\n[![](https://img.shields.io/badge/Documentation-online-green)](https://safebench.readthedocs.io)\n[![](https://img.shields.io/badge/Website-online-green)](https://safebench.github.io)\n[![](https://img.shields.io/badge/Paper-2206.09682-b31b1b.svg)](https://arxiv.org/pdf/2206.09682.pdf)\n[![](https://img.shields.io/badge/License-MIT-blue)](#License)\n\u003c/div\u003e\n\n\n\n| Perception Evaluation | Control Evaluation |\n| :-------------------: | :----------------: | \n| ![perception](https://github.com/safebench/safebench.github.io/blob/master/videos/perception.gif) | ![control](https://github.com/safebench/safebench.github.io/blob/master/videos/control.gif) | \n\n\n## Installation\n\n**Recommended system: Ubuntu 20.04 or 22.04**\n\n### 1. 
Local Installation\n\n\u003cdetails\u003e\n    \u003csummary\u003e Click to expand \u003c/summary\u003e\n\nStep 1: Setup conda environment\n```bash\nconda create -n safebench python=3.8\nconda activate safebench\n```\n\nStep 2: Clone this git repo in an appropriate folder\n```bash\ngit clone git@github.com:trust-ai/SafeBench.git\n```\n\nStep 3: Enter the repo root folder and install the packages:\n```bash\ncd SafeBench\npip install -r requirements.txt\npip install -e .\n```\n\nStep 4: Download our [CARLA_0.9.13](https://drive.google.com/file/d/139vLRgXP90Zk6Q_du9cRdOLx7GJIw_0v/view?usp=sharing) and extract it to your folder.\n\nStep 5: Run `sudo apt install libomp5` as per this [git issue](https://github.com/carla-simulator/carla/issues/4498).\n\nStep 6: Add the python API of CARLA to the ```PYTHONPATH``` environment variable. You can add the following commands to your `~/.bashrc`:\n```bash\nexport CARLA_ROOT={path/to/your/carla}\nexport PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.13-py3.8-linux-x86_64.egg\nexport PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/agents\nexport PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla\nexport PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI\n```\n\u003c/details\u003e\n\n### 2. Docker Installation (Beta)\n\n\u003cdetails\u003e\n    \u003csummary\u003e Click to expand \u003c/summary\u003e\n\nWe also provide a docker image with CARLA and SafeBench installed. Use the following command to launch a docker container:\n\n```bash\nbash docker/run_docker.sh\n```\n\nThe CARLA simulator is installed at `/home/safebench/carla` and SafeBench is installed at `/home/safebench/SafeBench`.\n\n\u003c/details\u003e\n\n## Usage\n\n### 1. 
Desktop Users\n\n\u003cdetails\u003e\n    \u003csummary\u003e Click to expand \u003c/summary\u003e\n\nEnter the CARLA root folder, launch the CARLA server and run our platform with\n```bash\n# Launch CARLA\n./CarlaUE4.sh -prefernvidia -windowed -carla-port=2000\n\n# Launch SafeBench in another terminal\npython scripts/run.py --agent_cfg basic.yaml --scenario_cfg standard.yaml --mode eval\n```\n\u003c/details\u003e\n\n### 2. Remote Server Users\n\n\u003cdetails\u003e\n    \u003csummary\u003e Click to expand \u003c/summary\u003e\n\nEnter the CARLA root folder, launch the CARLA server with headless mode, and run our platform with\n```bash\n# Launch CARLA\n./CarlaUE4.sh -prefernvidia -RenderOffScreen -carla-port=2000\n\n# Launch SafeBench in another terminal\nSDL_VIDEODRIVER=\"dummy\" python scripts/run.py --agent_cfg basic.yaml --scenario_cfg standard.yaml --mode eval\n```\n\n(Optional) You can also visualize the pygame window using [TurboVNC](https://sourceforge.net/projects/turbovnc/files/).\nFirst, launch CARLA with headless mode, and run our platform on a virtual display.\n```bash\n# Launch CARLA\n./CarlaUE4.sh -prefernvidia -RenderOffScreen -carla-port=2000\n\n# Run a remote VNC-Xserver. 
This will create a virtual display \"8\".\n/opt/TurboVNC/bin/vncserver :8 -noxstartup\n\n# Launch SafeBench on the virtual display\nDISPLAY=:8 python scripts/run.py --agent_cfg basic.yaml --scenario_cfg standard.yaml --mode eval\n```\n\nYou can use the TurboVNC client on your local machine to connect to the virtual display.\n```bash\n# Use the built-in SSH client of TurboVNC Viewer\n/opt/TurboVNC/bin/vncviewer -via user@host localhost:n\n\n# Or you can manually forward connections to the remote server by\nssh -L fp:localhost:5900+n user@host\n# Open another terminal on local machine\n/opt/TurboVNC/bin/vncviewer localhost::fp\n```\nwhere `user@host` is your remote server, `fp` is a free TCP port on the local machine, and `n` is the display port specified when you started the VNC server on the remote server (\"8\" in our example).\n\n\u003c/details\u003e\n\n### 3. Visualization with CarlaViz\n\n\u003cdetails\u003e\n    \u003csummary\u003e Click to expand \u003c/summary\u003e\n\n![carlaviz](./docs/source/images/carlaviz.png)\nCarlaViz is a convenient visualization tool for CARLA developed by [mjxu96](https://github.com/mjxu96), a former member of our team. To use CarlaViz, please open another terminal and follow the instructions:\n```bash\n# pull docker image from docker hub\ndocker pull mjxu96/carlaviz:0.9.13\n\n# run docker container of CarlaViz\ncd SafeBench/scripts\nsh start_carlaviz.sh\n```\nThen, you can open the CarlaViz window at http://localhost:8080. You can also access CarlaViz remotely by forwarding port 8080 to your local machine.\n\u003c/details\u003e\n\n### 4. 
Scenic users\n\n\u003cdetails\u003e\n    \u003csummary\u003e Click to expand \u003c/summary\u003e\n\nIf you want to use Scenic to control the surrounding adversarial agents and RL to control the ego, first install Scenic as follows:\n\n```bash\n# Download Scenic repository\ngit clone https://github.com/BerkeleyLearnVerify/Scenic.git\ncd Scenic\npython -m pip install -e .\n```\n\nThen create a directory in ```safebench/scenario/scenario_data/scenic_data```, e.g., ```Carla_Challenge```, and put your Scenic files in that directory (the relative map path defined in the Scenic files should be ```../maps/*.xodr```).\n\nNext, set the param ```scenic_dir``` in ```safebench/scenario/config/scenic.yaml``` to the directory where you store the Scenic files, e.g., ```safebench/scenario/scenario_data/scenic_data/Carla_Challenge```, and our code will automatically load all Scenic files in that directory.\n\nFor selecting the most adversarial scenes, the param ```sample_num``` in ```scenic.yaml``` determines the number of scenes sampled for each Scenic file, and the param ```select_num``` specifies how many of the most adversarial scenes to select from those ```sample_num``` samples:\n\n```bash\npython scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode train_scenario\n```\n\nNow you can test the ego with these selected adversarial scenes:\n\n```bash\npython scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode eval\n```\n\nOr, if you want to launch it on the virtual display:\n\n```bash\nDISPLAY=:8 python scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode train_scenario\nDISPLAY=:8 python scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode eval\n```\n\u003c/details\u003e\n\n## Running Arguments\n\n| Argument | Choice | Usage |\n| :----: | :----: | :---- |\n| `mode` | `{train_agent, train_scenario, 
eval}` | We provide three modes: training the agent, training the scenario, and evaluation. |\n| `agent_cfg`      | str  |  Path to the agent configuration file. |\n| `scenario_cfg`   | str  |  Path to the scenario configuration file. |\n| `max_episode_step`      | int     | Maximum number of steps per episode when training agents and scenarios. |\n| `num_scenario`  | `{1, 2, 3, 4}` | We support running multiple scenarios in parallel. The current map allows at most 4 scenarios. |\n| `save_video`    | store_true     |  Save videos during evaluation mode. |\n| `auto_ego`      | store_true     |  Overwrite the ego agent's action with auto-pilot. |\n| `port`      | int     |  Port used by CARLA (default: 2000). |\n","funding_links":[],"categories":["\u003ca id=\"tools\"\u003e\u003c/a\u003e🛠️ Tools"],"sub_categories":["Model Evaluation"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftrust-ai%2FSafeBench","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftrust-ai%2FSafeBench","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftrust-ai%2FSafeBench/lists"}