Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Awesome_Information_Extraction
Literature Survey of Information Extraction, especially Relation Extraction, Event Extraction, and Slot Filling.
https://github.com/wutong8023/Awesome_Information_Extraction
Last synced: 5 days ago
JSON representation
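The indexed data for this list is also exposed as a JSON representation. Below is a minimal sketch of fetching and inspecting it with Python's standard library; the endpoint path and the field names printed at the end are assumptions for illustration only, not a documented ecosyste.ms API route, so check the service's API docs before relying on them.

```python
# Minimal sketch: download the JSON representation of an indexed awesome list.
# NOTE: LIST_URL is an assumed path for illustration; consult the ecosyste.ms
# API documentation for the actual route.
import json
import urllib.request

LIST_URL = (
    "https://awesome.ecosyste.ms/lists/"
    "wutong8023%2FAwesome_Information_Extraction.json"  # assumed endpoint
)

def fetch_list(url: str) -> dict:
    """Download and parse the JSON representation of an awesome list."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_list(LIST_URL)
    # Print a few likely top-level fields; the exact schema depends on the service.
    for key in ("name", "url", "description"):
        print(key, "->", data.get(key))
```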
ACL
- [**PARE: A Simple and Strong Baseline for Monolingual and Multilingual Distantly Supervised Relation Extraction**](https://aclanthology.org/2022.acl-short.38), by *Rathore, Vipul et al.*
- [**A Two-Step Approach for Implicit Event Argument Detection**](https://doi.org/10.18653/v1/2020.acl-main.667), by *Zhisong Zhang et al.*
- **Adaptive Knowledge-Enhanced Bayesian Meta-Learning for Few-shot Event Detection**
- **Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism**
- [**Creating Training Corpora for NLG Micro-Planners**](https://doi.org/10.18653/v1/P17-1017), by *Claire Gardent et al.*
- [**Exploiting Syntactico-Semantic Structures for Relation Extraction**](https://aclanthology.org/P11-1056/), by *Yee Seng Chan et al.*
- [**Exploring Various Knowledge in Relation Extraction**](https://aclanthology.org/P05-1053/), by *Guodong Zhou et al.*
- [**Incremental Joint Extraction of Entity Mentions and Relations**](https://doi.org/10.3115/v1/p14-1038), by *Qi Li et al.*
- [**DocRED: A Large-Scale Document-Level Relation Extraction Dataset**](https://www.aclweb.org/anthology/P19-1074), by *Yao, Yuan et al.*
- [**Three Sentences Are All You Need: Local Path Enhanced Document Relation Extraction**](https://aclanthology.org/2021.acl-short.126), by *Huang, Quzhe et al.*
arXiv
- [**Semi-supervised Relation Extraction via Incremental Meta Self-Training**](https://arxiv.org/abs/2010.16410), by *Xuming Hu et al.*
- [**Good Visual Guidance Makes A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction**](https://arxiv.org/abs/2205.03521), by *Chen, Xiang, Zhang, Ningyu, Li, Lei, Yao, Yunzhi, Deng, Shumin, Tan, Chuanqi, Huang, Fei, Si, Luo and Chen, Huajun* [[bib]](https://github.com/wutong8023/Awesome_Information_Extraction/blob/master/./bibtex.bib#L2799-L2812)
- [**Fine-tune Bert for DocRED with Two-step Process**](http://arxiv.org/abs/1909.11898), by *Hong Wang et al.*
- [**DRILL: Dynamic Representations for Imbalanced Lifelong Learning**](https://arxiv.org/abs/2105.08445), by *Kyra Ahrens et al.*
IJCAI
- [**Consistent Inference for Dialogue Relation Extraction**](https://doi.org/10.24963/ijcai.2021/535), by *Long, Xinwei, Niu, Shuzi and Li, Yucheng* [[bib]](https://github.com/wutong8023/Awesome_Information_Extraction/blob/master/./bibtex.bib#L1067-L1074)
EMNLP
- [**Kernel Methods for Relation Extraction**](https://aclanthology.org/W02-1010/), by *Dmitry Zelenko et al.*
- **FewRel: A Large-Scale Supervised Few-shot Relation Classification Dataset with State-of-the-Art Evaluation**
- [**Modeling Joint Entity and Relation Extraction with Table Representation**](https://doi.org/10.3115/v1/d14-1200), by *Makoto Miwa et al.*
- [**Cost-effective End-to-end Information Extraction for Semi-structured Document Images**](https://aclanthology.org/2021.emnlp-main.271), by *Hwang, Wonseok et al.*
- [**Global-to-Local Neural Networks for Document-Level Relation Extraction**](https://doi.org/10.18653/v1/2020.emnlp-main.303), by *Difeng Wang et al.*
- **Connecting the Dots: Document-level Neural Relation Extraction with Edge-oriented Graphs**
NAACL
- [**Document-Level Event Argument Extraction by Conditional Generation**](https://www.aclweb.org/anthology/2021.naacl-main.69/), by *Sha Li et al.*
- [**A Linear Programming Formulation for Global Inference in Natural Language Tasks**](https://aclanthology.org/W04-2401), by *Roth, Dan et al.*
- [**GMN: Generative Multi-modal Network for Practical Document Information Extraction**](https://aclanthology.org/2022.naacl-main.276), by *Cao, Haoyu et al.*
- [**Open Hierarchical Relation Extraction**](https://www.aclweb.org/anthology/2021.naacl-main.452), by *Zhang, Kai et al.*
COLING
- **Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction**
- **Graph Enhanced Dual Attention Network for Document-Level Relation Extraction**
AACL
- **More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction**
ICML
- [**Few-shot Relation Extraction via Bayesian Meta-learning on Relation Graphs**](http://proceedings.mlr.press/v119/qu20a.html), by *Qu, Meng, Gao, Tianyu, Xhonneux, Louis-Pascal and Tang, Jian* [[bib]](https://github.com/wutong8023/Awesome_Information_Extraction/blob/master/./bibtex.bib#L2605-L2612)
ICLR
- [**Prototypical Representation Learning for Relation Extraction**](https://openreview.net/forum?id=aCgLmfhIy_f), by *Ning Ding et al.*
NeurIPS
- [**Knowledge Extraction with No Observable Data**](https://proceedings.neurips.cc/paper/2019/hash/596f713f9a7376fe90a62abaaedecc2d-Abstract.html), by *Jaemin Yoo et al.*
ECML
- [**Modeling Relations and Their Mentions without Labeled Text**](https://doi.org/10.1007/978-3-642-15939-8_10), by *Sebastian Riedel et al.*
MM
- [**Multimodal Relation Extraction with Efficient Graph Alignment**](https://doi.org/10.1145/3474085.3476968), by *Changmeng Zheng et al.*
SIGIR
- [**Graph Learning Regularization and Transfer Learning for Few-Shot Event Detection**](https://doi.org/10.1145/3404835.3463054), by *Lai, Viet Dac, Nguyen, Minh Van, Nguyen, Thien Huu and Dernoncourt, Franck* [[bib]](https://github.com/wutong8023/Awesome_Information_Extraction/blob/master/./bibtex.bib#L1342-L1349)
KDD
- [**Knowledge-Enhanced Domain Adaptation in Few-Shot Relation Classification**](https://doi.org/10.1145/3447548.3467438), by *Zhang, Jiawen, Zhu, Jiaqi, Yang, Yi, Shi, Wandong, Zhang, Congcong and Wang, Hongan* [[bib]](https://github.com/wutong8023/Awesome_Information_Extraction/blob/master/./bibtex.bib#L1287-L1294)
AAAI
- [**Neural Snowball for Few-Shot Relation Learning**](https://aaai.org/ojs/index.php/AAAI/article/view/6281), by *Tianyu Gao et al.*
- **Hybrid Attention-Based Prototypical Networks for Noisy Few-Shot Relation Classification**
- [**Document-Level Relation Extraction with Reconstruction**](https://ojs.aaai.org/index.php/AAAI/article/view/17667), by *Wang Xu et al.*
ECAI
- [**Span-Based Joint Entity and Relation Extraction with Transformer Pre-Training**](https://doi.org/10.3233/FAIA200321), by *Markus Eberts et al.*
WWW
- **CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases**
TACL
- [**Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes**](https://aclanthology.org/2021.tacl-1.42), by *Sabo, Ofer et al.*
- [**Cross-Sentence N-ary Relation Extraction with Graph LSTMs**](https://transacl.org/ojs/index.php/tacl/article/view/1028), by *Nanyun Peng et al.*
ACM Comput. Surv.
- [**Relation Extraction Using Distant Supervision: A Survey**](https://doi.org/10.1145/3241741), by *Alisa Smirnova et al.*
NUSE-NAACL
- [**Document-level Event Extraction with Efficient End-to-end Learning of Cross-event Dependencies**](https://www.aclweb.org/anthology/2021.nuse-1.4), by *Huang, Kung-Hsiang et al.*
EACL
- [**Bootstrapping Relation Extractors using Syntactic Search by Examples**](https://www.aclweb.org/anthology/2021.eacl-main.128/), by *Matan Eyal et al.*