# PJLab-ADG-awesome-knowledge-driven-AD

A curated list of datasets, benchmarks, papers, and tools for knowledge-driven autonomous driving.

https://github.com/2lambda123/PJLab-ADG-awesome-knowledge-driven-AD
- BDD-X (metrics: BLEU-4, METEOR, CIDEr-D)
- Cityscapes-Ref
- DR(eye)VE
- HAD
- Talk2Car
- DADA-2000
- HDBD
- Refer-KITTI
- DRAMA
- Rank2Tell (metrics: BLEU-4, METEOR, ROUGE, CIDEr)
- DriveLM (metric: Score)
- NuScenes-QA
- DESIGN (metrics: BLEU-4, METEOR, ROUGE, L2 Error, Collision Rate)
- Reason2Drive (metrics: BLEU-4, METEOR, ROUGE, CIDEr)
- NuScenes-MQA (metrics: BLEU-4, METEOR, ROUGE)
- LangAuto
- DriveMLM (metrics: BLEU-4, METEOR, CIDEr)
- NuInstruct (frame-, ego-, and instance-level information; tasks: question answering, scene captioning; metrics: MAE, Accuracy, BLEU-4, mAP)
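Several of the benchmarks above score generated driving captions and answers with BLEU-4. As a purely illustrative sketch (not any benchmark's official evaluation toolkit, which may use different tokenization and smoothing), sentence-level BLEU-4 can be computed like this:

```python
# Minimal, self-contained sentence-level BLEU-4 sketch: uniform 4-gram
# weights, +1 smoothing, and a brevity penalty. Illustration only; the
# benchmarks above (e.g. BDD-X, Rank2Tell) use their own official toolkits.
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(reference, candidate):
    ref, cand = reference.split(), candidate.split()
    log_prec = 0.0
    for n in range(1, 5):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # Clipped n-gram matches, +1 smoothing to avoid log(0).
        match = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        log_prec += 0.25 * math.log((match + 1) / (total + 1))
    # Brevity penalty for candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(log_prec)

print(round(bleu4("the car slows down because the light is red",
                  "the car slows down because the light is red"), 3))  # 1.0
```

An identical candidate and reference score 1.0; unrelated sentences score close to 0.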
- UniSim: A Neural Closed-Loop Sensor Simulator
- NuPlan: A closed-loop ML-based planning benchmark for autonomous vehicles
- DrivingDiffusion: Layout-Guided multi-view driving scene video generation with latent diffusion model
- OccWorld: Learning a 3D Occupancy World Model for Autonomous Driving
- ADriver-I: A General World Model for Autonomous Driving
- Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving [[Github](https://github.com/BraveGroup/Drive-WM)]
- WoVoGen: World Volume-aware Diffusion for Controllable Multi-camera Driving Scene Generation
- DriveDreamer: Towards Real-world-driven World Models for Autonomous Driving
- MagicDrive: Street View Generation with Diverse 3D Geometry Control
- GAIA-1: A Generative World Model for Autonomous Driving
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research
- MUVO: A Multimodal Generative World Model for Autonomous Driving with Geometric Representations
- Natural-language-driven Simulation Benchmark and Copilot for Efficient Production of Object Interactions in Virtual Road Scenes
- LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs
- DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes
- OccNeRF: Self-Supervised Multi-Camera Occupancy Prediction with Neural Radiance Fields (code: `coming soon`)
- Neural Lighting Simulation for Urban Scenes
- Street Gaussians for Modeling Dynamic Urban Scenes
- Grounding human-to-vehicle advice for self-driving vehicles
- ADAPT: Action-aware Driving Caption Transformer
- Talk to the Vehicle: Language Conditioned Autonomous Navigation of Self Driving Cars
- Talk2Car: Taking Control of Your Self-Driving Car `EMNLP-IJCNLP 2019`, [[Project](https://macchina-ai.cs.kuleuven.be/)]
- Textual explanations for self-driving vehicles
- Drive Like a Human: Rethinking Autonomous Driving with Large Language Models
- DriveGPT4: Interpretable End-to-end Autonomous Driving via Large Language Model
- DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models
- GPT-Driver: Learning to Drive with GPT
- Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
- LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving
- Receive, Reason, and React: Drive as You Say with Large Language Models in Autonomous Vehicles
- Drive as You Speak: Enabling Human-Like Interaction with Large Language Models in Autonomous Vehicles
- SurrealDriver: Designing Generative Driver Agent Simulation Framework in Urban Contexts based on Large Language Model
- Language-Guided Traffic Simulation via Scene-Level Diffusion
- Language Prompt for Autonomous Driving
- Talk2BEV: Language-Enhanced Bird's Eye View (BEV) Maps
- BEVGPT: Generative Pre-trained Large Model for Autonomous Driving Prediction, Decision-Making, and Planning
- HiLM-D: Towards High-Resolution Understanding in Multimodal Large Language Models for Autonomous Driving
- Can you text what is happening? Integrating pre-trained language encoders into trajectory prediction models for autonomous driving
- OpenAnnotate3D: Open-Vocabulary Auto-Labeling System for Multi-modal 3D Data
- LangProp: A Code Optimization Framework Using Language Models Applied to Driving
- Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion
- Planning with an Ensemble of World Models
- Large Language Models Can Design Game-Theoretic Objectives for Multi-Agent Planning
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction
- BEV-CLIP: Multi-Modal BEV Retrieval Methodology for Complex Scene in Autonomous Driving
- Semantic Anomaly Detection with Large Language Models
- Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving
- DRAMA: Joint Risk Localization and Captioning in Driving
- 3D Dense Captioning Beyond Nouns: A Middleware for Autonomous Driving
- SwapTransformer: Highway Overtaking Tactical Planner Model via Imitation Learning on OSHA Dataset
- NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario
- Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models
- Addressing Limitations of State-Aware Imitation Learning for Autonomous Driving
- A Language Agent for Autonomous Driving
- Human-Centric Autonomous Systems With LLMs for User Command Reasoning
- On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving
- Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving
- GPT-4 Enhanced Multimodal Grounding for Autonomous Driving: Leveraging Cross-Modal Attention with Large Language Models
- ChatGPT as Your Vehicle Co-Pilot: An Initial Attempt
- DriveLLM: Charting The Path Toward Full Autonomous Driving with Large Language Models
- NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets using Markup Annotations
- Evaluation of Large Language Models for Decision Making in Autonomous Driving
- LMDrive: Closed-Loop End-to-End Driving with Large Language Models
- DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral Planning States for Autonomous Driving
- Large Language Models for Autonomous Driving: Real-World Experiments
- LingoQA: Video Question Answering for Autonomous Driving
- DriveLM: Driving with Graph Visual Question Answering
- LLM-Assist: Enhancing Closed-Loop Planning with Language-Based Reasoning
- Holistic Autonomous Driving Understanding by Bird’s-Eye-View Injected Multi-Modal Large Models
- Applications of Large Scale Foundation Models for Autonomous Driving
- A Survey on Multimodal Large Language Models for Autonomous Driving
- A Survey of Large Language Models for Autonomous Driving
- Vision Language Models in Autonomous Driving and Intelligent Transportation Systems
- A Survey of Simulators for Autonomous Driving: Taxonomy, Challenges, and Evaluation Metrics
- Towards Knowledge-driven Autonomous Driving
- WACV2024 Workshop