# Awesome Referring Expression Comprehension
> Inspired by [awesome-grounding](https://github.com/TheShadow29/awesome-grounding) and this [survey](https://arxiv.org/pdf/2007.09554.pdf).
A curated list of research papers in Referring Expression Comprehension (REC). Links to code and project websites are included where available.
## Table of Contents
- [Contributing](#contributing)
- [Paper List](#paper-list)
- [Survey](#survey)
- [Dataset](#dataset)
- [arXiv](#arxiv)
- [2020](#2020)
- [2019](#2019)
- [2018](#2018)
- [2017](#2017)
- [2016](#2016)
- [Acknowledgement](#acknowledgement)

## Paper List
### Survey
- **Referring Expression Comprehension: A Survey of Methods and Datasets**. *Yanyuan Qiao, Chaorui Deng, and Qi Wu*. arXiv, 2020. [[Paper]](https://arxiv.org/pdf/2007.09554.pdf)
### Dataset
- **[RefCOCOg] Generation and Comprehension of Unambiguous Object Descriptions**. *Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, and Kevin Murphy*. CVPR, 2016. [[Paper]](https://arxiv.org/pdf/1511.02283.pdf) [[Code]](https://github.com/mjhucla/Google_Refexp_toolbox)
- **[RefCOCO, RefCOCO+] Modeling context in referring expressions**. *Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg*. ECCV, 2016. [[Paper]](https://arxiv.org/pdf/1608.00272.pdf) [[Code]](https://github.com/lichengunc/refer)
- **[CLEVR-Ref+] CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions**. *Runtao Liu, Chenxi Liu, Yutong Bai, and Alan Yuille*. CVPR, 2019. [[Paper]](https://arxiv.org/pdf/1901.00850.pdf) [[Code]](https://github.com/ccvl/clevr-refplus-dataset-gen) [[Website]](https://cs.jhu.edu/~cxliu/2019/clevr-ref+)
- **[Cops-Ref] Cops-Ref: A new Dataset and Task on Compositional Referring Expression Comprehension**. *Zhenfang Chen, Peng Wang, Lin Ma, Kwan-Yee K. Wong, and Qi Wu*. CVPR, 2020. [[Paper]](https://arxiv.org/pdf/2003.00403.pdf) [~~[Code]~~](https://github.com/zfchenUnique/Cops-Ref)
- **[Ref-Reasoning] Graph-Structured Referring Expression Reasoning in The Wild**. *Sibei Yang, Guanbin Li, and Yizhou Yu*. CVPR, 2020. [[Paper]](https://arxiv.org/pdf/2004.08814.pdf) [[Code]](https://github.com/sibeiyang/sgmn) [[Website]](https://sibeiyang.github.io/dataset/ref-reasoning/)

### arXiv
- **(TransVG) TransVG: End-to-End Visual Grounding with Transformers**. *Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, and Houqiang Li*. arXiv, 2021. [[Paper]](https://arxiv.org/pdf/2104.08541.pdf)
- **(ECIFA) Give Me Something to Eat: Referring Expression Comprehension with Commonsense Knowledge**. *Peng Wang, Dongyang Liu, Hui Li, and Qi Wu*. arXiv, 2020. [[Paper]](https://arxiv.org/pdf/2006.01629.pdf)
- **(JVGN) Joint Visual Grounding with Language Scene Graphs**. *Daqing Liu, Hanwang Zhang, Zheng-Jun Zha, Meng Wang, and Qianru Sun*. arXiv, 2019. [[Paper]](https://arxiv.org/pdf/1906.03561.pdf) *(I am an author of the paper)*
- **A Real-time Global Inference Network for One-stage Referring Expression Comprehension**. *Yiyi Zhou et al.* arXiv, 2019. [[Paper]](https://arxiv.org/pdf/1912.03478.pdf) [[Code]](https://github.com/luogen1996/Real-time-Global-Inference-Network)
- **(SGG) Real-Time Referring Expression Comprehension by Single-Stage Grounding Network**. *Xinpeng Chen, Lin Ma, Jingyuan Chen, Zequn Jie, Wei Liu, and Jiebo Luo*. arXiv, 2018. [[Paper]](https://arxiv.org/pdf/1812.03426v1.pdf)

### 2020
- **Improving One-stage Visual Grounding by Recursive Sub-query Construction**. *Zhengyuan Yang, Tianlang Chen, Liwei Wang, and Jiebo Luo*. ECCV, 2020. [[Paper]](https://arxiv.org/pdf/2008.01059.pdf) [[Code]](https://github.com/zyang-ur/ReSC)
- **(LSCM) Linguistic Structure Guided Context Modeling for Referring Image Segmentation**. *Tianrui Hui et al.* ECCV, 2020. [[Paper]](http://colalab.org/media/paper/Linguistic_Structure_Guided_Context_Modeling_for_Referring_Image_Segmentation.pdf)
- **(BiLingUNet) BiLingUNet: Image Segmentation by Modulating Top-Down and Bottom-Up Visual Processing with Referring Expressions**. *Ozan Arkan Can, İlker Kesen, and Deniz Yuret*. ECCV, 2020. [[Paper]](https://arxiv.org/pdf/2003.12739.pdf)
- **(SGMN) Graph-Structured Referring Expression Reasoning in The Wild**. *Sibei Yang, Guanbin Li, and Yizhou Yu*. CVPR, 2020. [[Paper]](https://arxiv.org/pdf/2004.08814.pdf) [[Code]](https://github.com/sibeiyang/sgmn) [[Website]](https://sibeiyang.github.io/dataset/ref-reasoning/)
- **(MCN) Multi-task Collaborative Network for Joint Referring Expression Comprehension and Segmentation**. *Gen Luo et al.* CVPR, 2020. [[Paper]](https://arxiv.org/pdf/2003.08813.pdf) [[Code]](https://github.com/luogen1996/MCN)
- **(RCCF) A Real-Time Cross-modality Correlation Filtering Method for Referring Expression Comprehension**. *Yue Liao et al.* CVPR, 2020. [[Paper]](https://arxiv.org/pdf/1909.07072.pdf)
- **(LCMCG) Learning Cross-modal Context Graph for Visual Grounding**. *Yongfei Liu, Bo Wan, Xiaodan Zhu, and Xuming He*. AAAI, 2020. [[Code]](https://github.com/youngfly11/LCMCG-PyTorch)

### 2019
- **(NMTree) Learning to Assemble Neural Module Tree Networks for Visual Grounding**. *Daqing Liu, Hanwang Zhang, Feng Wu, and Zheng-Jun Zha*. ICCV, 2019. [[Paper]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Learning_to_Assemble_Neural_Module_Tree_Networks_for_Visual_Grounding_ICCV_2019_paper.pdf) [[Code]](https://github.com/daqingliu/NMTree) *(I am an author of the paper)*
- **(RvG-Tree) Learning to Compose and Reason with Language Tree Structures for Visual Grounding**. *Richang Hong, Daqing Liu, Xiaoyu Mo, Xiangnan He, and Hanwang Zhang*. TPAMI, 2019. [[Paper]](https://arxiv.org/pdf/1906.01784.pdf) *(I am an author of the paper)*
- **(FAOA) A Fast and Accurate One-Stage Approach to Visual Grounding**. *Zhengyuan Yang, Boqing Gong, Liwei Wang, Wenbing Huang, Dong Yu, and Jiebo Luo*. ICCV, 2019. [[Paper]](https://arxiv.org/pdf/1908.06354.pdf) [[Code]](https://github.com/zyang-ur/onestage_grounding)
- **(DGA) Dynamic Graph Attention for Referring Expression Comprehension**. *Sibei Yang, Guanbin Li, and Yizhou Yu*. ICCV, 2019. [[Paper]](https://arxiv.org/pdf/1909.08164.pdf) [[Code]](https://github.com/sibeiyang/sgmn/tree/master/lib/dga_models)
- **(LCGN) Language-Conditioned Graph Networks for Relational Reasoning**. *Ronghang Hu, Anna Rohrbach, Trevor Darrell, and Kate Saenko*. ICCV, 2019. [[Paper]](https://arxiv.org/pdf/1905.04405.pdf) [[Code]](https://github.com/ronghanghu/lcgn)
- **See-Through-Text Grouping for Referring Image Segmentation**. *Ding-Jie Chen, Songhao Jia, Yi-Chen Lo, Hwann-Tzong Chen, and Tyng-Luh Liu*. ICCV, 2019. [[Paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_See-Through-Text_Grouping_for_Referring_Image_Segmentation_ICCV_2019_paper.pdf)
- **(CMRIN) Cross-Modal Relationship Inference for Grounding Referring Expressions**. *Sibei Yang, Guanbin Li, and Yizhou Yu*. CVPR, 2019. [[Paper]](https://arxiv.org/pdf/1906.04464.pdf)
- **(CM-Att-Erase) Improving Referring Expression Grounding with Cross-modal Attention-guided Erasing**. *Xihui Liu, Zihao Wang, Jing Shao, Xiaogang Wang, and Hongsheng Li*. CVPR, 2019. [[Paper]](https://arxiv.org/pdf/1903.00839.pdf)
- **(CMSA) Cross-Modal Self-Attention Network for Referring Image Segmentation**. *Linwei Ye, Mrigank Rochan, Zhi Liu, and Yang Wang*. CVPR, 2019. [[Paper]](https://arxiv.org/pdf/1904.04745.pdf) [[Code]](https://github.com/lwye/CMSA-Net)

### 2018
- **(Multi-hop FiLM) Visual Reasoning with Multi-hop Feature Modulation**. *Florian Strub, Mathieu Seurin, Ethan Perez, and Harm De Vries*. ECCV, 2018. [[Paper]](https://arxiv.org/pdf/1808.04446.pdf)
- **(DDPN) Rethinking diversified and discriminative proposal generation for visual grounding**. *Zhou Yu, Jun Yu, Chenchao Xiang, Zhou Zhao, Qi Tian, and Dacheng Tao*. IJCAI, 2018. [[Paper]](https://www.ijcai.org/proceedings/2018/0155.pdf) [[Code]](https://github.com/XiangChenchao/DDPN)
- **(MAttNet) MAttNet: Modular Attention Network for Referring Expression Comprehension**. *Licheng Yu et al.* CVPR, 2018. [[Paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Yu_MAttNet_Modular_Attention_CVPR_2018_paper.pdf) [[Code]](https://github.com/lichengunc/MAttNet) [[Website]](http://vision2.cs.unc.edu/refer/comprehension)
- **(AccumAttn) Visual Grounding via Accumulated Attention**. *Chaorui Deng, Qi Wu, Qingyao Wu, Fuyuan Hu, Fan Lyu, and Mingkui Tan*. CVPR, 2018. [[Paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Deng_Visual_Grounding_via_CVPR_2018_paper.pdf)
- **(ParalAttn) Parallel Attention: A Unified Framework for Visual Object Discovery Through Dialogs and Queries**. *Bohan Zhuang, Qi Wu, Chunhua Shen, Ian Reid, and Anton Van Den Hengel*. CVPR, 2018. [[Paper]](https://arxiv.org/pdf/1711.06370.pdf) [[Code]](https://github.com/bohanzhuang/Parallel-Attention-A-Unified-Framework-for-Visual-Object-Discovery-through-Dialogs-and-Queries)
- **(LGRAN) Neighbourhood Watch: Referring Expression Comprehension via Language-guided Graph Attention Networks**. *Peng Wang, Qi Wu, Jiewei Cao, Chunhua Shen, Lianli Gao, and Anton van den Hengel*. CVPR, 2018. [[Paper]](https://arxiv.org/pdf/1812.04794.pdf)
- **(VariContext) Grounding Referring Expressions in Images by Variational Context**. *Hanwang Zhang, Yulei Niu, and Shih-Fu Chang*. CVPR, 2018. [[Paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Grounding_Referring_Expressions_CVPR_2018_paper.pdf) [[Code]](https://github.com/yuleiniu/vc/)
- **(GroundNet) Using Syntax to Ground Referring Expressions in Natural Images**. *Volkan Cirik, Taylor Berg-Kirkpatrick, and Louis-Philippe Morency*. AAAI, 2018. [[Paper]](https://arxiv.org/pdf/1805.10547.pdf) [[Code]](https://github.com/volkancirik/groundnet)

### 2017
- **Recurrent Multimodal Interaction for Referring Image Segmentation**. *Chenxi Liu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, and Alan Yuille*. ICCV, 2017. [[Paper]](https://arxiv.org/pdf/1703.07939.pdf) [[Code]](https://github.com/chenxi116/TF-phrasecut-public)
- **(Attribute) Referring Expression Generation and Comprehension via Attributes**. *Jingyu Liu, Liang Wang, and Ming-Hsuan Yang*. ICCV, 2017. [[Paper]](http://faculty.ucmerced.edu/mhyang/papers/iccv2017_referring_expression.pdf)
- **(CMN) Modeling relationships in referential expressions with compositional modular networks**. *Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko*. CVPR, 2017. [[Paper]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Hu_Modeling_Relationships_in_CVPR_2017_paper.pdf) [[Code]](https://github.com/ronghanghu/cmn)
- **(Spe+Lis+RI) A Joint Speaker-Listener-Reinforcer Model for Referring Expressions**. *Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L. Berg*. CVPR, 2017. [[Paper]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Yu_A_Joint_Speaker-Listener-Reinforcer_CVPR_2017_paper.pdf) [[Code]](https://github.com/lichengunc/speaker_listener_reinforcer) [[Website]](https://vision.cs.unc.edu/refer/)
- **Comprehension-guided referring expressions**. *Ruotian Luo and Gregory Shakhnarovich*. CVPR, 2017. [[Paper]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Luo_Comprehension-Guided_Referring_Expressions_CVPR_2017_paper.pdf) [[Code]](https://github.com/ruotianluo/refexp-comprehension)

### 2016
- **(MCB) Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding**. *Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach*. EMNLP, 2016. [[Paper]](https://arxiv.org/pdf/1606.01847.pdf) [[Code]](https://github.com/akirafukui/vqa-mcb)
- **(NegBag) Modeling context between objects for referring expression understanding**. *Varun K. Nagaraja, Vlad I. Morariu, and Larry S. Davis*. ECCV, 2016. [[Paper]](https://arxiv.org/pdf/1608.00525.pdf) [[Code]](https://github.com/varun-nagaraja/referring-expressions)
- **(VisDif) Modeling context in referring expressions**. *Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg*. ECCV, 2016. [[Paper]](https://arxiv.org/pdf/1608.00272.pdf) [[Code]](https://github.com/lichengunc/refer)
- **(SCRC) Natural Language Object Retrieval**. *Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell*. CVPR, 2016. [[Paper]](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Hu_Natural_Language_Object_CVPR_2016_paper.pdf) [[Code]](https://github.com/ronghanghu/natural-language-object-retrieval) [[Website]](http://ronghanghu.com/text_obj_retrieval/)
- **(MMI) Generation and Comprehension of Unambiguous Object Descriptions**. *Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, and Kevin Murphy*. CVPR, 2016. [[Paper]](https://arxiv.org/pdf/1511.02283.pdf) [[Code]](https://github.com/mjhucla/Google_Refexp_toolbox)
## Contributing
Feel free to contact me via email ([email protected]), open an issue, or submit a pull request.
To add a new paper via pull request:
1. Fork the repo, edit `README.md`.
1. Add the new paper at its correct chronological position, using the following format:
```
- **Paper Title**. *Author(s)*. Conference, Year. [[Paper]](link) [[Code]](link) [[Website]](link)
```
1. Send a pull request. Ideally, I will review the request within a week.

## Acknowledgement
This repo is maintained by [Daqing LIU](http://home.ustc.edu.cn/~liudq/).
Other Awesome Vision-Language lists: [Awesome Vision-Language Navigation](https://github.com/daqingliu/awesome-vln), [Awesome-Video-Captioning](https://github.com/tgc1997/Awesome-Video-Captioning).