# Awesome Visual Dialog
Recent Advances in Visual Dialog
Maintained by Feilong Chen. Last update on 2022/08/19.

## Table of Contents

* [Image-based Visual Dialog](#image-based-visual-dialog)
* [Video-based Visual Dialog](#video-based-visual-dialog)
* [Other Resources](#other-resources)

# Image-based Visual Dialog

## Visual Dialog

1. [Visual Dialog](https://arxiv.org/abs/1611.08669), CVPR 2017, [[code]](https://github.com/batra-mlp-lab/visdial)

2. [Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model](https://arxiv.org/abs/1706.01554), NIPS 2017, [[code]](https://github.com/jiasenlu/visDial.pytorch)

3. [Are You Talking to Me? Reasoned Visual Dialog Generation through Adversarial Learning](https://arxiv.org/abs/1711.07613), CVPR 2018

4. [Image-Question-Answer Synergistic Network for Visual Dialog](https://arxiv.org/abs/1902.09774), CVPR 2019

5. [Reasoning Visual Dialogs with Structural and Partial Observations](https://arxiv.org/abs/1904.05548), CVPR 2019, [[code]](https://github.com/zilongzheng/visdial-gnn)

6. [Recursive Visual Attention in Visual Dialog](https://arxiv.org/abs/1812.02664), CVPR 2019, [[code]](https://github.com/yuleiniu/rva)

7. Dual Visual Attention Network for Visual Dialog, IJCAI 2019

8. [Making History Matter: History-Advantage Sequence Training for Visual Dialog](https://arxiv.org/abs/1902.09326), ICCV 2019

9. [Granular Multimodal Attention Networks for Visual Dialog](https://arxiv.org/abs/1910.05728), ICCV Workshop 2019

10. [Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog](https://arxiv.org/abs/1902.00579), ACL 2019

11. [Dual Attention Networks for Visual Reference Resolution in Visual Dialog](https://arxiv.org/abs/1902.09368), EMNLP 2019, [[code]](https://github.com/gicheonkang/dan-visdial)

12. [DMRM: A Dual-channel Multi-hop Reasoning Model for Visual Dialog](https://arxiv.org/abs/1912.08360), AAAI 2020, [[code]](https://github.com/phellonchen/DMRM)

13. [Modality-Balanced Models for Visual Dialogue](https://arxiv.org/abs/2001.06354), AAAI 2020

14. [DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue](https://arxiv.org/abs/1911.07251), AAAI 2020, [[code]](https://github.com/JXZe/DualVD)

15. [Two Causal Principles for Improving Visual Dialog](https://arxiv.org/abs/1911.10496), CVPR 2020, [[code]](https://github.com/simpleshinobu/visdial-principles)

16. [DAM: Deliberation, Abandon and Memory Networks for Generating Detailed and Non-repetitive Responses in Visual Dialogue](https://arxiv.org/abs/2007.03310), IJCAI 2020, [[code]](https://github.com/JXZe/DAM)

17. [KBGN: Knowledge-Bridge Graph Network for Adaptive Vision-Text Reasoning in Visual Dialogue](https://arxiv.org/abs/2008.04858), ACM MM 2020

18. [Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline](https://arxiv.org/abs/1912.02379), ECCV 2020, [[code]](https://github.com/vmurahari3/visdial-bert)

19. [Visual Dialog: Light-weight Transformer for Many Inputs](https://arxiv.org/abs/1911.11390), ECCV 2020, [[code]](https://github.com/davidnvq/visdial)

20. [Multi-View Attention Network for Visual Dialog](https://arxiv.org/abs/2004.14025), ACL 2020, [[code]](https://github.com/taesunwhang/MVAN-VisDial)

21. [History for Visual Dialog: Do we really need it?](https://aclanthology.org/2020.acl-main.728/), ACL 2020, [[code]](https://github.com/shubhamagarwal92/visdial_conv)

22. [VD-BERT: A Unified Vision and Dialog Transformer with BERT](https://arxiv.org/abs/2004.13278), EMNLP 2020, [[code]](https://github.com/salesforce/VD-BERT)

23. [GoG: Graph-over-Graph Network for Visual Dialog](https://aclanthology.org/2021.findings-acl.20/), ACL Findings 2021

24. [Multimodal Incremental Transformer for Visual Dialogue Generation](https://aclanthology.org/2021.findings-acl.38/), ACL Findings 2021

25. [Learning to Ground Visual Objects for Visual Dialog](https://arxiv.org/abs/2109.06013), EMNLP Findings 2021

26. [VU-BERT: A Unified framework for Visual Dialog](https://arxiv.org/abs/2202.10787), ICASSP 2022

27. [Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning](https://arxiv.org/abs/2204.07302), ICASSP 2022

28. [UTC: A Unified Transformer with Inter-Task Contrastive Learning for Visual Dialog](https://arxiv.org/abs/2205.00423), CVPR 2022

29. [Unsupervised and Pseudo-Supervised Vision-Language Alignment in Visual Dialog](https://arxiv.org/abs/2205.00423), ACM MM 2022

## GuessWhat
[GuessWhat?! Visual object discovery through multi-modal dialogue](https://arxiv.org/abs/1611.08481), CVPR 2017, [[code]](https://github.com/GuessWhatGame/guesswhat)

## GuessWhich
[Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning](https://arxiv.org/abs/1703.06585), ICCV 2017, [[code]](https://github.com/batra-mlp-lab/visdial-rl)

# Video-based Visual Dialog

[Bridging Text and Video: A Universal Multimodal Transformer for Video-Audio Scene-Aware Dialog](https://arxiv.org/abs/2002.00163), AAAI 2020, [[code]](https://github.com/ictnlp/DSTC8-AVSD)

# Other Resources

* [Visual Dialog Homepage](https://visualdialog.org/)
* [Visual Dialog Challenge Starter Code](https://github.com/batra-mlp-lab/visdial-challenge-starter-pytorch)