# Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

## Awesome-World-Model

A collection of World Model papers for Autonomous Driving.

https://github.com/LMD0311/Awesome-World-Model

Last synced: 1 day ago
## Papers

### Survey

### 2024
- **CarFormer**
- **MARL-CCE**
- **MagicDrive**
- **RoboDreamer**
- **GenAD**
- **Panacea**
- **DriveDreamer-2**
- **3D-VLA**
- **Copilot4D**
- **SafeDreamer**
- **ViDAR**
- **Drive-WM**
- **OccWorld**
- **DriveDreamer**
- **OccSora**
- **OccLLaMA**
- **UMAD**
- **SEM2**
- **Vista**
- **MagicDrive3D**
- **CarDreamer**
- **Cam4DOcc**
- **Think2Drive**
- **DrivingDiffusion**
- **SimGen**
- **LAW**
- **Delphi**
- **SubjectDrive**
- **DriveGenVLM**
- **Drive-OccWorld**
- **LidarDM**
- **DriveSim**
- **DriveWorld**
- **BEVWorld**
- **TOKEN**
- **UnO**
- **GenAD**
- **AdaptiveDriver**
### World model original paper

### Technical blog or video
### 2023

- **TrafficBots**
- **WoVoGen**
- **CTT**
- **MUVO**
- **GAIA-1**
- **ADriver-I**
- **UniWorld**
### 2022

- **MILE**
- **Symphony**
- **Iso-Dream**
- **SEM2**

## Other World Model Papers

### 2024
- **UrbanWorld**
- **DLLM**
- **Genie**
- **Sora**
- **V-JEPA** [[Code](https://github.com/facebookresearch/jepa)]
- **LWM**
- **WorldDreamer**
- **DreamSmooth**
- **Puppeteer**
- **MoReFree**
- **R2I**
- **BWArea Model**
- **Pandora**
- **WKM**
- **Diamond**
- **MAMBA**
- **Dreaming of Many Worlds**
- **ManiGaussian**
- **IWM**
- **MagicTime**
- **TD-MPC2**
- **HarmonyDream**
- **EBWM**
- **LLM-Sim**
- **DWL**
- **Δ-IRIS**
- **GenRL**
- **Newton**
- **Compete and Compose**
- **CityBench**
- **CoDreamer**
- **AD3**
- **Hieros**
- **HRSSM**
- **REM**
- **PWM**
- **Predicting vs. Acting**
### 2022

- **Iso-Dream**
- **TD-MPC**
- **DayDreamer**
- **DreamingV2**
- **DreamerPro**

### 2021

- **Dreaming**
- **DreamerV2**

### 2023

- **IRIS**
- **STORM**
- **TWM**
- **Dynalang**
- **CoWorld**
- **DreamerV3**

### 2020

- **DreamerV1**
- **Plan2Explore**
## Workshop & Challenge

- `CVPR 2024 Workshop & Challenge | OpenDriveLab`
- `CVPR 2023 Workshop on Autonomous Driving`: an occupancy forecasting challenge using the [Argoverse 2 Sensor Dataset](https://www.argoverse.org/av2.html#sensor-link). Predict the spacetime occupancy of the world for the next 3 seconds.
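To make the occupancy-forecasting task concrete, here is a minimal sketch of what "predict the spacetime occupancy of the world for the next 3 seconds" looks like as tensors. All names, grid resolutions, and the frame rate below are illustrative assumptions, not the official challenge format; the forecaster shown is just a trivial static-world baseline (repeat the last observed frame), not a method from any listed paper.

```python
import numpy as np

# Assumed (not official) task shapes: a binary voxel occupancy grid
# over a bird's-eye-view region, forecast over a 3-second horizon.
HORIZON_S = 3.0          # prediction horizon in seconds
STEP_HZ = 10             # assumed frame rate -> 30 future frames
T = int(HORIZON_S * STEP_HZ)
H, W, Z = 200, 200, 16   # illustrative grid resolution (x, y, height)

def static_world_forecast(past_occ: np.ndarray) -> np.ndarray:
    """Trivial baseline: assume nothing moves and repeat the last
    observed occupancy frame T times. past_occ: (N_past, H, W, Z)."""
    last = past_occ[-1]                       # (H, W, Z)
    return np.repeat(last[None], T, axis=0)   # (T, H, W, Z)

# Toy input: 5 past frames with a single occupied voxel in the last one.
past = np.zeros((5, H, W, Z), dtype=bool)
past[-1, 100, 100, 4] = True
pred = static_world_forecast(past)

assert pred.shape == (T, H, W, Z)
assert pred[:, 100, 100, 4].all()  # static baseline keeps the voxel occupied
```

A real entry would replace `static_world_forecast` with a learned model (e.g. a world model rolling out future latent states) and would be scored against the ground-truth future occupancy, but the input/output contract is the same: past occupancy in, a spacetime grid of future occupancy out.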