Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Awesome-LLM4AD
A curated list of awesome resources on Large Language Models (LLMs) for Autonomous Driving, continually updated.
https://github.com/Thinklab-SJTU/Awesome-LLM4AD
ICLR 2024 Open Review
- RADIal
- Large Language Models as Decision Makers for Autonomous Driving
- DriveGPT4: Interpretable End-to-end Autonomous Driving via Large Language Model
- BEV-CLIP: Multi-modal BEV Retrieval Methodology for Complex Scene in Autonomous Driving
- GPT-Driver: Learning to Drive with GPT
- Radar Spectra-language Model for Automotive Scene Parsing
- GeoDiffusion: Text-Prompted Geometric Control for Object Detection Data Generation
- SPOT: Scalable 3D Pre-training via Occupancy Prediction for Autonomous Driving
- 3D Dense Captioning Beyond Nouns: A Middleware for Autonomous Driving
Papers
- ChatScene
- DriveSim
- OmniDrive
- AirSim
- HighwayEnv
- Large Language Models Powered Context-aware Motion Prediction
- WOMD
- LLM-Assisted-Light
- ChatSim
- DriveVLM
- GenAD
- RAG-Driver
- LLaDA
- LangProp
- official
- DriveLM: Driving with Graph Visual Question Answering
- OpenLane-V2
- official
- official
- official
- Evaluation of Large Language Models for Decision Making in Autonomous Driving
- official
- official
- Talk2car
- Dolphins
- Drive-WM
- official
- WiDEVIEW
- LLaVA-7B-1.5, [latent-diffusion](https://github.com/CompVis/latent-diffusion)
- Agent-Driver
- DriveCmd
- UCU Dataset
- official
- MagicDrive
- ComplexUrbanScenarios
- official
- official
- Talk2BEV
- GPT-Driver
- CitySim: a drone-based vehicle trajectory dataset.
- official
- official
- official
- ADAPT
- CARLA
- AGENTSCODRIVER: Large Language Model Empowered Collaborative Driving with Lifelong Learning
- Multi-Frame, Lightweight & Efficient Vision-Language Models for Question Answering in Autonomous Driving
- official
- Mapillary, [nuImages](https://www.nuscenes.org/nuimages), [BDD100k](https://www.vis.xyz/bdd100k/), [Waymo](https://waymo.com/open/), [KITTI](https://www.cvlibs.net/datasets/kitti/)
- highD
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving
- Engineering Safety Requirements for Autonomous Driving with Large Language Models
- LeGo-Drive: Language-enhanced Goal-oriented Closed-Loop End-to-End Autonomous Driving
- LeGo-Drive
- LeGo-Drive
- Hybrid Reasoning Based on Large Language Models for Autonomous Car Driving
- Large Language Models Powered Context-aware Motion Prediction
- MTR
- Generalized Predictive Model for Autonomous Driving
- DriveAGI
- DriveLM: Driving with Graph Visual Question Answering
- LingoQA: Video Question Answering for Autonomous Driving
- DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral Planning States for Autonomous Driving
- Carla
- ONCE
- Large Language Models for Autonomous Driving: Real-World Experiments
- official
- LMDrive: Closed-Loop End-to-End Driving with Large Language Models
- Evaluation of Large Language Models for Decision Making in Autonomous Driving
- LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs
- Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving
- Dolphins: Multimodal Language Model for Driving
- Dolphins
- GQA
- MSCOCO, [OK-VQA](https://okvqa.allenai.org/), [TDIUC](https://kushalkafle.com/projects/tdiuc.html), [Visual Genome dataset](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html)
- Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving
- Drive-WM
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective
- GPT-4V Takes the Wheel: Evaluating Promise and Challenges for Pedestrian Behavior Prediction
- JAAD
- PIE
- ADriver-I: A General World Model for Autonomous Driving
- A Language Agent for Autonomous Driving
- UniAD
- Human-Centric Autonomous Systems With LLMs for User Command Reasoning
- On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving
- ChatGPT as Your Vehicle Co-Pilot: An Initial Attempt
- MagicDrive: Street View Generation with Diverse 3D Geometry Control
- MagicDrive
- Receive, Reason, and React: Drive as You Say with Large Language Models in Autonomous Vehicles
- video
- DrivingDiffusion: Layout-Guided multi-view driving scene video generation with latent diffusion model
- official
- official
- DriveGPT4: Interpretable End-to-end Autonomous Driving via Large Language Model
- official
- GPT-Driver: Learning to Drive with GPT
- official
- GAIA-1: A Generative World Model for Autonomous Driving
- DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models
- SurrealDriver: Designing Generative Driver Agent Simulation Framework in Urban Contexts based on Large Language Model
- Drive as You Speak: Enabling Human-Like Interaction with Large Language Models in Autonomous Vehicles
- DriveDreamer: Towards Real-world-driven World Models for Autonomous Driving
- official
- Can you text what is happening? Integrating pre-trained language encoders into trajectory prediction models for autonomous driving
- TrafficGPT: Viewing, Processing and Interacting with Traffic Foundation Models
- HiLM-D: Towards High-Resolution Understanding in Multimodal Large Language Models for Autonomous Driving
- MTD-GPT: A Multi-Task Decision-Making GPT Model for Autonomous Driving at Unsignalized Intersections
- Domain Knowledge Distillation from Large Language Model: An Empirical Study in the Autonomous Driving Domain
- Drive Like a Human: Rethinking Autonomous Driving with Large Language Models
- Language-Guided Traffic Simulation via Scene-Level Diffusion
- ADAPT: Action-aware Driving Caption Transformer
- DriveVLM: The Convergence of Autonomous Driving and Large Vision-Language Models
- LimSim++
- AccidentGPT: Accident Analysis and Prevention from V2X Environmental Perception with Multi-modal Large Model
- AccidentGPT
- VLP: Vision Language Planning for Autonomous Driving
- DME-Driver: Integrating Human Decision Logic and 3D Scene Perception in Autonomous Driving
- Holistic Autonomous Driving Understanding by Bird’s-Eye-View Injected Multi-Modal Large Models
- LLM-ASSIST: Enhancing Closed-Loop Planning with Language-Based Reasoning
- LLM-ASSIST
- LeGEND: A Top-Down Approach to Scenario Generation of Autonomous Driving Systems Assisted by Large Language Models
- LeGEND
- MiniDrive: More Efficient Vision-Language Models with Multi-Level 2D Features as Text Tokens for Autonomous Driving
- MiniDrive
- OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving
- ContextVLM: Zero-Shot and Few-Shot Context Understanding for Autonomous Driving using Vision Language Models
- ContextVLM
- DriveGenVLM: Real-world Video Generation for Vision Language Model based Autonomous Driving
- Edge-Cloud Collaborative Motion Planning for Autonomous Driving with Large Language Models
- EC-Drive
- V2X-VLM: End-to-End V2X Cooperative Autonomous Driving Through Large Vision-Language Models
- V2X-VLM
- V2X-VLM
- DAIR-V2X
- VLM-MPC: Vision Language Foundation Model (VLM)-Guided Model Predictive Controller (MPC) for Autonomous Driving
- SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving
- WOMD-Reasoning: A Large-Scale Language Dataset for Interaction and Driving Intentions Reasoning
- WOMD-Reasoning
- Exploring the Potential of Multi-Modal AI for Driving Hazard Prediction
- DHPR
- RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model
- Dolphins
- CitySim: a drone-based vehicle trajectory dataset.
- official
- Testing Large Language Models on Driving Theory Knowledge and Skills for Connected Autonomous Vehicles
- UK Driving Theory Test Practice Questions and Answers
- KoMA: Knowledge-driven Multi-agent Framework for Autonomous Driving with Large Language Models
- KoMA
- Asynchronous Large Language Model Enhanced Planner for Autonomous Driving
- AsyncDriver
- nuPlan Closed-Loop Reactive Hard20
- Embodied Understanding of Driving Scenarios
- ELM
- AGENTSCODRIVER: Large Language Model Empowered Collaborative Driving with Lifelong Learning
- LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving
- Driving Everywhere with Large Language Model Policy Adaptation
- LimSim++
- LangProp: A code optimization framework using Language Models applied to driving
- LLM-Assisted Light: Leveraging Large Language Model Capabilities for Human-Mimetic Traffic Signal Control in Complex Urban Environments
- DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation
- DriveDreamer-2
- Editable Scene Simulation for Autonomous Driving via Collaborative LLM-Agents
- GenAD: Generative End-to-End Autonomous Driving
- A Superalignment Framework in Autonomous Driving with Large Language Models
- PlanAgent: A Multi-modal Large Language Agent for Closed-loop Vehicle Motion Planning
- ChatScene: Knowledge-Enabled Safety-Critical Scenario Generation for Autonomous Vehicles
- Probing Multimodal LLMs as World Models for Driving
- OmniDrive: A Holistic LLM-Agent Framework for Autonomous Driving with 3D Perception Reasoning and Planning
- REvolve: Reward Evolution with Large Language Models for Autonomous Driving
- LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving
- nuScenes
- GPT-4 Enhanced Multimodal Grounding for Autonomous Driving: Leveraging Cross-Modal Attention with Large Language Models
- official
- video
- Talk2BEV: Language-enhanced Bird’s-eye View Maps for Autonomous Driving
- REvolve
- LC-LLM: Explainable Lane-Change Intention and Trajectory Predictions with Large Language Models
- LimSim++
Datasets
- WEDGE
- NuScenes-QA
- HAD dataset
- Rank2Tell: A Multimodal Driving Dataset for Joint Importance Ranking and Reasoning
- DriveLM
- WEDGE: A multi-weather autonomous driving dataset built from generative vision-language models
- NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario
- DRAMA: Joint Risk Localization and Captioning in Driving
- Textual Explanations for Self-Driving Vehicles
- Grounding Human-To-Vehicle Advice for Self-Driving Vehicles
- official
- DriveLM: Drive on Language
- BDD-X dataset
- Language Prompt for Autonomous Driving
- DRAMA
- Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
Workshop
Keywords
- autonomous-driving (10)
- large-language-models (5)
- llm (4)
- carla-simulator (4)
- gpt-4 (3)
- dataset (3)
- computer-vision (3)
- reinforcement-learning (2)
- birds-eye-view (2)
- large-vision-language-models (2)
- occupancy-grid-map (2)
- autonomous-vehicles (2)
- carla-driving-simulator (2)
- digitaltwin (2)
- drone (2)
- vision-language-navigation (2)
- openclip (2)
- end-to-end-autonomous-driving (2)
- vision-language (2)
- trajectory (2)
- sumo (2)
- open (2)
- simulation (2)
- open-data (2)
- gym-environment (1)
- airsim (1)
- emergency-vehicles (1)
- intelligent-transportation-systems (1)
- ai (1)
- scenic (1)
- traffic-signal-control (1)
- 3d (1)
- 3d-reconstruction (1)
- 3d-vision (1)
- auto (1)
- carla (1)
- llm-agent (1)
- unreal-engine (1)
- simulator (1)
- self-driving-car (1)
- research (1)
- platform-independent (1)
- pixhawk (1)
- drones (1)
- deeplearning (1)
- deep-reinforcement-learning (1)
- cross-platform (1)
- artificial-intelligence (1)
- control-systems (1)
- visual-language-learning (1)