Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/daqingliu/awesome-vln
A curated list of research papers in Vision-Language Navigation (VLN)
List: awesome-vln
arxiv awesome-list computer-vision natural-language-understanding papers vision-and-language vision-and-language-navigation
- Host: GitHub
- URL: https://github.com/daqingliu/awesome-vln
- Owner: daqingliu
- License: mit
- Created: 2019-10-27T08:45:58.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2024-04-17T08:11:28.000Z (7 months ago)
- Last Synced: 2024-05-20T01:00:40.261Z (6 months ago)
- Topics: arxiv, awesome-list, computer-vision, natural-language-understanding, papers, vision-and-language, vision-and-language-navigation
- Homepage:
- Size: 40 KB
- Stars: 166
- Watchers: 17
- Forks: 31
- Open Issues: 5
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Computer-Vision (Awesome Computer Vision)
- ultimate-awesome - awesome-vln - A curated list of research papers in Vision-Language Navigation (VLN). (Other Lists / PowerShell Lists)
README
# Awesome Vision-Language Navigation
A curated list of research papers in Vision-Language Navigation (VLN). Links to code and project websites are included where available. You can also find more embodied vision papers in **[awesome-embodied-vision](https://github.com/ChanganVR/awesome-embodied-vision)**.
## Contributing
Please feel free to contact me via email ([email protected]), open an issue, or submit a pull request.
To add a new paper via pull request:
1. Fork the repo, edit `README.md`.
2. Put the new paper in the correct chronological position, using the following format (a filled-in example appears after this list):
```
- **Paper Title**
*Author(s)*
Conference, Year. [[Paper]](link) [[Code]](link) [[Website]](link)
```
3. Send a pull request. Ideally, I will review the request within a week.
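For reference, here is the entry format filled in with a paper that already appears in the list below:

```
- **Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments**
*Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel*
CVPR, 2018. [[Paper]](https://arxiv.org/abs/1711.07280) [[Code]](https://github.com/peteanderson80/Matterport3DSimulator) [[Website]](https://bringmeaspoon.org)
```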
## Papers
### Tasks:
- **Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments**
*Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel*
CVPR, 2018. [[Paper]](https://arxiv.org/abs/1711.07280) [[Code]](https://github.com/peteanderson80/Matterport3DSimulator) [[Website]](https://bringmeaspoon.org)
- **HoME: a Household Multimodal Environment**
*Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville*
NIPS Workshop, 2017. [[Paper]](https://arxiv.org/abs/1711.11017) [[Code]](https://github.com/ml-lab/home-platform)
- **Talk the Walk: Navigating New York City through Grounded Dialogue**
*Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, Douwe Kiela*
arXiv, 2019. [[Paper]](https://arxiv.org/abs/1807.03367) [[Code]](https://github.com/facebookresearch/talkthewalk)
- **Touchdown: Natural Language Navigation and Spatial Reasoning in Visual Street Environments**
*Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, Yoav Artzi*
CVPR, 2019. [[Paper]](https://arxiv.org/abs/1811.12354) [[Code]](https://github.com/lil-lab/touchdown) [[Website]](https://sites.google.com/view/streetlearn/touchdown)
- **Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention**
*Khanh Nguyen, Debadeepta Dey, Chris Brockett, Bill Dolan*
CVPR, 2019. [[Paper]](https://arxiv.org/abs/1812.04155) [[Code]](https://github.com/debadeepta/vnla) [[Video]](https://youtu.be/Vp6C29qTKQ0)
- **Learning To Follow Directions in Street View**
*Karl Moritz Hermann, Mateusz Malinowski, Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, Raia Hadsell*
AAAI, 2020. [[Paper]](https://arxiv.org/abs/1903.00401) [[Website]](https://sites.google.com/view/streetlearn/aaai-2020)
- **REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments**
*Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, Anton van den Hengel*
CVPR, 2020. [[Paper]](https://arxiv.org/abs/1904.10151)
- **Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation**
*Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, Jason Baldridge*
ACL, 2019. [[Paper]](https://arxiv.org/abs/1905.12255) [[Code]](https://github.com/google-research/google-research/tree/master/r4r)
- **Vision-and-Dialog Navigation**
*Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer*
CoRL, 2019. [[Paper]](https://arxiv.org/abs/1907.04957) [[Website]](https://cvdn.dev/)
- **Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning**
*Khanh Nguyen, Hal Daumé III*
EMNLP, 2019. [[Paper]](https://arxiv.org/abs/1909.01871) [[Code]](https://github.com/khanhptnk/hanna) [[Video]](https://youtu.be/18P94aaaLKg)
- **Talk2Nav: Long-Range Vision-and-Language Navigation with Dual Attention and Spatial Memory**
*Arun Balajee Vasudevan, Dengxin Dai, Luc Van Gool*
arXiv, 2019. [[Paper]](https://arxiv.org/abs/1910.02029) [[Website]](https://people.ee.ethz.ch/~arunv/talk2nav.html)
- **Cross-Lingual Vision-Language Navigation**
*An Yan, Xin Wang, Jiangtao Feng, Lei Li, William Yang Wang*
arXiv, 2019. [[Paper]](https://arxiv.org/abs/1910.11301) [[Code]](https://github.com/zzxslp/Crosslingual-VLN)
- **Beyond the Nav-Graph: Vision-and-Language Navigation in Continuous Environments**
*Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, Stefan Lee*
arXiv, 2020. [[Paper]](https://arxiv.org/abs/2004.02857) [[Code]](https://github.com/jacobkrantz/VLN-CE)
- **Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation**
*Muhammad Zubair Irshad, Chih-Yao Ma, Zsolt Kira*
ICRA, 2021. [[Paper]](https://arxiv.org/abs/2104.10674) [[Code]](https://github.com/GT-RIPL/robo-vln)

### Roadmap (Chronological Order):
- **Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments**
*Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, Anton van den Hengel*
CVPR, 2018. [[Paper]](https://arxiv.org/abs/1711.07280) [[Code]](https://github.com/peteanderson80/Matterport3DSimulator) [[Website]](https://bringmeaspoon.org)
- **Look Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation**
*Xin Wang, Wenhan Xiong, Hongmin Wang, William Yang Wang*
ECCV, 2018. [[Paper]](https://arxiv.org/abs/1803.07729)
- **Speaker-Follower Models for Vision-and-Language Navigation**
*Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell*
NeurIPS, 2018. [[Paper]](https://arxiv.org/abs/1806.02724) [[Code]](https://github.com/ronghanghu/speaker_follower) [[Website]](http://ronghanghu.com/speaker_follower/)
- **Shifting the Baseline: Single Modality Performance on Visual Navigation & QA**
*Jesse Thomason, Daniel Gordon, Yonatan Bisk*
NAACL, 2019. [[Paper]](https://arxiv.org/abs/1811.00613) [[Poster]](https://jessethomason.com/personal_site/www/publication_supplements/NAACL19_poster.pdf)
- **Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation**
*Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, Lei Zhang*
CVPR, 2019. [[Paper]](https://arxiv.org/abs/1811.10092)
- **Self-Monitoring Navigation Agent via Auxiliary Progress Estimation**
*Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan AlRegib, Zsolt Kira, Richard Socher, Caiming Xiong*
ICLR, 2019. [[Paper]](https://arxiv.org/abs/1901.03035) [[Code]](https://github.com/chihyaoma/selfmonitoring-agent) [[Website]](https://chihyaoma.github.io/project/2018/09/27/selfmonitoring.html)
- **The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation**
*Chih-Yao Ma, Zuxuan Wu, Ghassan AlRegib, Caiming Xiong, Zsolt Kira*
CVPR, 2019. [[Paper]](https://arxiv.org/abs/1903.01602) [[Code]](https://github.com/chihyaoma/regretful-agent) [[Website]](https://chihyaoma.github.io/project/2019/02/25/regretful.html)
- **Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation**
*Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa*
CVPR, 2019. [[Paper]](http://arxiv.org/abs/1903.02547) [[Code]](https://github.com/Kelym/FAST) [[Video]](https://www.youtube.com/watch?v=AD9TNohXoPA)
- **Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout**
*Hao Tan, Licheng Yu, Mohit Bansal*
NAACL, 2019. [[Paper]](https://arxiv.org/abs/1904.04195) [[Code]](https://github.com/airsplay/R2R-EnvDrop)
- **Multi-modal Discriminative Model for Vision-and-Language Navigation**
*Haoshuo Huang, Vihan Jain, Harsh Mehta, Jason Baldridge, Eugene Ie*
NAACL Workshop, 2019. [[Paper]](https://arxiv.org/abs/1905.13358)
- **Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation**
*Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, Kate Saenko*
ACL, 2019. [[Paper]](https://arxiv.org/abs/1906.00347)
- **Chasing Ghosts: Instruction Following as Bayesian State Tracking**
*Peter Anderson, Ayush Shrivastava, Devi Parikh, Dhruv Batra, Stefan Lee*
NeurIPS, 2019. [[Paper]](https://arxiv.org/abs/1907.02022) [[Code]](https://github.com/batra-mlp-lab/vln-chasing-ghosts) [[Video]](https://www.youtube.com/watch?v=eoGbescCNP0)
- **Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters**
*Federico Landi, Lorenzo Baraldi, Massimiliano Corsini, Rita Cucchiara*
BMVC, 2019. [[Paper]](https://arxiv.org/abs/1907.02985) [[Code]](https://github.com/aimagelab/DynamicConv-agent)
- **Transferable Representation Learning in Vision-and-Language Navigation**
*Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, Eugene Ie*
ICCV, 2019. [[Paper]](https://arxiv.org/abs/1908.03409)
- **Robust Navigation with Language Pretraining and Stochastic Sampling**
*Xiujun Li, Chunyuan Li, Qiaolin Xia, Yonatan Bisk, Asli Celikyilmaz, Jianfeng Gao, Noah Smith, Yejin Choi*
EMNLP, 2019. [[Paper]](https://arxiv.org/abs/1909.02244) [[Code]](https://github.com/xjli/r2r_vln)
- **Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling**
*Tsu-Jui Fu, Xin Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang*
arXiv, 2019. [[Paper]](https://arxiv.org/abs/1911.07308)
- **Unsupervised Reinforcement Learning of Transferable Meta-Skills for Embodied Navigation**
*Juncheng Li, Xin Wang, Siliang Tang, Haizhou Shi, Fei Wu, Yueting Zhuang, William Yang Wang*
CVPR, 2020. [[Paper]](https://arxiv.org/abs/1911.07450)
- **Vision-Language Navigation with Self-Supervised Auxiliary Reasoning Tasks**
*Fengda Zhu, Yi Zhu, Xiaojun Chang, Xiaodan Liang*
CVPR, 2020. [[Paper]](https://arxiv.org/abs/1911.07883)
- **Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation**
*Federico Landi, Lorenzo Baraldi, Marcella Cornia, Massimiliano Corsini, Rita Cucchiara*
arXiv, 2019. [[Paper]](https://arxiv.org/abs/1911.12377) [[Code]](https://github.com/aimagelab/perceive-transform-and-act)
- **Just Ask: An Interactive Learning Framework for Vision and Language Navigation**
*Ta-Chung Chi, Mihail Eric, Seokhwan Kim, Minmin Shen, Dilek Hakkani-tur*
AAAI, 2020. [[Paper]](https://arxiv.org/abs/1912.00915)
- **Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training**
*Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, Jianfeng Gao*
CVPR, 2020. [[Paper]](https://arxiv.org/abs/2002.10638) [[Code]](https://github.com/weituo12321/PREVALENT)
- **Multi-View Learning for Vision-and-Language Navigation**
*Qiaolin Xia, Xiujun Li, Chunyuan Li, Yonatan Bisk, Zhifang Sui, Jianfeng Gao, Yejin Choi, Noah A. Smith*
arXiv, 2020. [[Paper]](https://arxiv.org/abs/2003.00857)
- **Vision-Dialog Navigation by Exploring Cross-modal Memory**
*Yi Zhu, Fengda Zhu, Zhaohuan Zhan, Bingqian Lin, Jianbin Jiao, Xiaojun Chang, Xiaodan Liang*
CVPR, 2020. [[Paper]](https://arxiv.org/abs/2003.06745) [[Code]](https://github.com/yeezhu/CMN.pytorch)
- **Take the Scenic Route: Improving Generalization in Vision-and-Language Navigation**
*Felix Yu, Zhiwei Deng, Karthik Narasimhan, Olga Russakovsky*
arXiv, 2020. [[Paper]](https://arxiv.org/abs/2003.14269)
- **Sub-Instruction Aware Vision-and-Language Navigation**
*Yicong Hong, Cristian Rodriguez-Opazo, Qi Wu, Stephen Gould*
arXiv, 2020. [[Paper]](https://arxiv.org/abs/2004.02707)
- **Beyond the Nav-Graph: Vision-and-Language Navigation in Continuous Environments**
*Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, Stefan Lee*
ECCV, 2020. [[Paper]](https://arxiv.org/abs/2004.02857) [[Code]](https://github.com/jacobkrantz/VLN-CE) [[Website]](https://jacobkrantz.github.io/vlnce)
- **Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling**
*Tsu-Jui Fu, Xin Eric Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang*
ECCV, 2020. [[Paper]](https://arxiv.org/abs/1911.07308)
- **Improving Vision-and-Language Navigation with Image-Text Pairs from the Web**
*Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, Dhruv Batra*
ECCV, 2020. [[Paper]](https://arxiv.org/abs/2004.14973)
- **Soft Expert Reward Learning for Vision-and-Language Navigation**
*Hu Wang, Qi Wu, Chunhua Shen*
ECCV, 2020. [[Paper]](https://arxiv.org/abs/2007.10835)
- **Active Visual Information Gathering for Vision-Language Navigation**
*Hanqing Wang, Wenguan Wang, Tianmin Shu, Wei Liang, Jianbing Shen*
ECCV, 2020. [[Paper]](https://arxiv.org/abs/2007.08037) [[Code]](https://github.com/HanqingWangAI/Active_VLN)
- **Environment-agnostic Multitask Learning for Natural Language Grounded Navigation**
*Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, Sujith Ravi*
ECCV, 2020. [[Paper]](https://arxiv.org/abs/2003.00443)
- **Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation**
*Muhammad Zubair Irshad, Chih-Yao Ma, Zsolt Kira*
ICRA, 2021. [[Paper]](https://arxiv.org/abs/2104.10674) [[Code]](https://github.com/GT-RIPL/robo-vln) [[Website]](https://zubair-irshad.github.io/projects/robo-vln.html) [[Video]](https://www.youtube.com/watch?v=y16x9n_zP_4)