
a paper reading list on Document level Relation Extraction


www-Ye/DocRE-reading-list

 
 


This is a paper reading list on Document level Relation Extraction.

Our list is still incomplete and the categorization might be imperfect. We will keep adding papers and improving the list. Any suggestions are welcome!

Datasets

  • DocRED, CDR, GDA

Doc RE Papers

2021

  1. EMNLP 2021. Learning Logic Rules for Document-level Relation Extraction. Dongyu Ru, Changzhi Sun, Jiangtao Feng, Lin Qiu, Hao Zhou, Weinan Zhang, Yong Yu, Lei Li.

  2. EMNLP 2021. Modular Self-Supervision for Document-Level Relation Extraction. Sheng Zhang, Cliff Wong, Naoto Usuyama, Sarthak Jain, Tristan Naumann, Hoifung Poon.
    👉 Method: Decompose document-level relation extraction into relation detection and argument resolution.

  3. ACL 2021. Three Sentences Are All You Need — Local Path Enhanced Document Relation Extraction, code. Quzhe Huang, Shengqi Zhu, Yansong Feng, Yuan Ye, Yuxuan Lai and Dongyan Zhao.
    👉 Method: Use heuristic rules to select at most 3 sentences for an entity pair

  4. ACL 2021 Findings. SIRE: Separate Intra- and Inter-sentential Reasoning for Document-level Relation Extraction, code. Shuang Zeng, Yuting Wu, Baobao Chang.
    👉 Method: Separate intra- and inter-sentential relations: for intra, use the sentence for the mention-pair representation, then aggregate all mention pairs into an entity-pair representation; for inter, use the graph from GAIN. Furthermore, a novel logical reasoning method is used.

  5. ACL 2021 Findings. Discriminative Reasoning for Document-level Relation Extraction, code. Wang Xu, Kehai Chen, Tiejun Zhao.
    👉 Method: Represent 3 types of paths for each entity pair: the intra-sentence reasoning path, the logical reasoning path, and the coreference reasoning path.

  6. ACL 2021 Findings. MRN: A Locally and Globally Mention-Based Reasoning Network for Document-Level Relation Extraction. Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren and Donghong Ji.

  7. EACL 2021. An End-to-end Model for Entity-level Relation Extraction using Multi-instance Learning. Markus Eberts, Adrian Ulges.
    👉 Method: a model for JOINT mention detection and doc RE: fine-tune BERT on 4 tasks: entity mention localization, coreference resolution, entity classification, and relation classification

  8. IJCAI 2021. Document-level Relation Extraction as Semantic Segmentation, code.
    👉 Method: Capture the correlation between relations with a U-Net, which enlarges the receptive field

  9. AAAI 2021. Document-Level Relation Extraction with Reconstruction. Wang Xu, Kehai Chen, Tiejun Zhao.
    👉 Method: build a graph like EoG, use an LSTM to calculate the probability of "inference meta paths", then maximize the probability of related entity pairs using BCE

  10. AAAI 2021. Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling. Wenxuan Zhou, Kevin Huang, Tengyu Ma, Jing Huang.
    👉 Method: improve BERT with entity markers, log-sum-exp pooling, and group bilinear classification + an adaptive-threshold class + directly use the transformer's attention matrix to aggregate words into the document representation

  11. AAAI 2021. Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction. Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, Zhendong Mao.
    👉 Method: add a structured attentive bias when calculating attention scores, to make BERT attend more to coreferent tokens

  12. AAAI 2021. Multi-view Inference for Relation Extraction with Uncertain Knowledge. Bo Li, Wei Ye, Canming Huang, Shikun Zhang.
    👉 Method: use KG concept knowledge: 3 attention aggregations (e2c, c2e, m2e) to get contextual and global entity-pair representations, another attention aggregation to get the sentence representation, and concatenation for the final classification

  13. PAKDD 2021. Densely Connected Graph Attention Network Based on Iterative Path Reasoning for Document-Level Relation Extraction. Hongya Zhang, Zhen Huang, Zhenzhen Li, Dongsheng Li, and Feng Liu.
    👉 Method: DCGAT for structural representation + inference same as EoG

  14. PAKDD 2021. SaGCN: Structure-Aware Graph Convolution Network for Document-Level Relation Extraction. Shuangji Yang, Taolin Zhang, Danning Su, Nan Hu, Wei Nong, and Xiaofeng He.
    👉 Method: a graph with explicit structure (dependency tree) and implicit structure (from a HardKuma distribution) for structural representation, and a graph for inference

  15. ECML-PKDD 2021. NA-Aware Machine Reading Comprehension for Document-Level Relation Extraction. Zhenyu Zhang, Bowen Yu, Xiaobo Shu, and Tingwen Liu.
    👉 Method: use an MRC-style encoder, aggregate mentions with directional attention flow, and add a NOTA (none-of-the-above) option to the label set

  16. ICASSP 2021. Multi-Granularity Heterogeneous Graph for Document-Level Relation Extraction.
    👉 Method: RGCN for graph reasoning, and entity-aware attention for the final relation representation

  17. IEEE. Multi-Scale Feature and Metric Learning for Relation Extraction. Mi Zhang, Tieyun Qian.

  18. Knowledge-Based Systems. Document-level relation extraction using evidence reasoning on RST-GRAPH. Hailin Wang, Ke Qin, Guoming Lu, Jin Yin, Rufai Yusuf Zakari, Jim Wilson Owusu.

  19. Information Sciences. Document-level relation extraction with entity-selection attention. Changsen Yuan, Heyan Huang, Chong Feng, Ge Shi, and Xiaochi Wei.
    👉 Method: Select the essential sentence-level and document-level features from the document via inter-sentence attention and combine them with document gating.

  20. Pattern Recognition Letters. Document-level Relation Extraction via Graph Transformer Networks and Temporal Convolutional Networks. Yong Shi, Yang Xiao, Pei Quan, MingLong Lei, and Lingfeng Niu.
    👉 Method: Temporal convolutional networks as encoder, build a graph with heuristic rules, and graph transformer networks for path generation

  21. arXiv 2021. SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction. Yuxin Xiao, Zecheng Zhang, Yuning Mao, Carl Yang, Jiawei Han.

  22. arXiv 2021. Eider: Evidence-enhanced Document-level Relation Extraction. Yiqing Xie, Jiaming Shen, Sha Li, Yuning Mao, Jiawei Han.
    👉 Method: Predict evidence sentences to construct a pseudo document, and use a blend layer to combine the predictions on the original and pseudo documents.

  23. arXiv 2021. BERT-GT: Cross-sentence n-ary relation extraction with BERT and Graph Transformer. Po-Ting Lai, Zhiyong Lu.
    👉 Method: densely connect Graph Transformer (neighbor attention) & Transformer to improve BERT

  24. arXiv 2021. MrGCN: Mirror Graph Convolution Network for Relation Extraction with Long-Term Dependencies. Xiao Guo, I-Hung Hsu, Wael AbdAlmageed, Premkumar Natarajan, Nanyun Peng.
    👉 Method: pooling-unpooling after GCN layers to enlarge the receptive field (like U-Net)

  25. arXiv 2021. Mention-centered Graph Neural Network for Document-level Relation Extraction. Jiaxin Pan, Min Peng, Yiyan Zhang.
    👉 Method: introduce mention-mention edge weight when building a graph
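Two building blocks recur in the 2021 list, notably in ATLOP (#10): log-sum-exp pooling of mention embeddings into an entity embedding, and an adaptive threshold realized as an extra learnable class. A minimal NumPy sketch; the function names, array shapes, and threshold-class index are illustrative assumptions, not the authors' code:

```python
import numpy as np

def logsumexp_pool(mention_embs):
    """Smooth-max pooling of a [num_mentions, dim] matrix of mention
    embeddings into one entity embedding. Unlike max pooling, every
    mention contributes, but the strongest one dominates."""
    m = np.max(mention_embs, axis=0)  # subtract the max for numerical stability
    return m + np.log(np.sum(np.exp(mention_embs - m), axis=0))

def adaptive_threshold_predict(logits, th_index=0):
    """Adaptive-threshold decision: predict a relation class only if its
    logit exceeds the logit of the learnable threshold (TH) class."""
    th = logits[th_index]
    return [i for i, s in enumerate(logits) if i != th_index and s > th]
```

Because the threshold class is scored per entity pair, each pair effectively gets its own decision boundary instead of one global probability cutoff.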

2020

  1. ACL 2020. Reasoning with Latent Structure Refinement for Document-Level Relation Extraction. Guoshun Nan, Zhijiang Guo, Ivan Sekulić, Wei Lu.
    👉 Method: Latent Structure + DCGCN

  2. EMNLP 2020. Double Graph Based Reasoning for Document-level Relation Extraction. Shuang Zeng, Runxin Xu, Baobao Chang, Lei Li.
    👉 Method: Mention graph (mention + doc node) + entity graph (at most 2-hop paths)

  3. EMNLP 2020. Global-to-Local Neural Networks for Document-Level Relation Extraction. Difeng Wang, Wei Hu, Ermei Cao, Weijian Sun.
    👉 Method: EoG + R-GCN + entity-pair attention

  4. EMNLP 2020. Denoising Relation Extraction from Document-level Distant Supervision. Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, Leyu Lin.
    👉 Method: using DS data to pre-train model for DocRE with 3 tasks: Mention-Entity Matching, Relation Detection, Relational Fact Alignment

  5. COLING 2020. Graph Enhanced Dual Attention Network for Document-Level Relation Extraction. Bo Li, Wei Ye, Zhonghao Sheng, Rui Xie, Xiangyu Xi, Shikun Zhang.
    👉 Method: bi-directional attn between sentence & relation instance + attn duality + support evidence guide

  6. COLING 2020. Global Context-enhanced Graph Convolutional Networks for Document-level Relation Extraction. Huiwei Zhou, Yibin Xu, Zhe Liu, Weihong Yao, Chengkun Lang, Haibin Jiang.
    👉 Method: entity graph with attn gate & attn adj matrix for entity representation + entity graph with multi-head attn as adj matrix for reasoning + dense-node&edge-GCN

  7. COLING 2020. Document-level Relation Extraction with Dual-tier Heterogeneous Graph. Zhenyu Zhang, Bowen Yu, Xiaobo Shu, Tingwen Liu, Hengzhu Tang, Yubin Wang and Li Guo.
    👉 Method: structure modeling graph + relation reasoning graph + weighted R-GCN

  8. PAKDD 2020. HIN: Hierarchical Inference Network for Document-Level Relation Extraction. Hengzhu Tang, Yanan Cao, Zhenyu Zhang, Jiangxia Cao, Fang Fang, Shi Wang, Pengfei Yin.
    👉 Method: Hierarchical (entity & document level) inference representation (using LSTMs, attention and all kinds of "concat") for an entity pair's representation

  9. ICKG 2020. Improving Document-level Relation Extraction via Contextualizing Mention Representations and Weighting Mention Pairs, code. Ping Jiang, Xian-Ling Mao, Binbin Bian and Heyan Huang.

  10. arXiv 2020. Coarse-to-Fine Entity Representations for Document-level Relation Extraction. Damai Dai, Jing Ren, Shuang Zeng, Baobao Chang, Zhifang Sui.
    👉 Method: a word graph for coarse representation, Bi-GRU for path encoding, and an attention aggregator

  11. arXiv 2020. Entity and Evidence Guided Relation Extraction for DocRED. Kevin Huang, Guangtao Wang, Tengyu Ma, Jing Huang.
    👉 Method: convert a doc into N {head entity; doc} samples for BERT, bilinear for RE, and double bilinear for evidence prediction as an auxiliary task

  12. IEEE Access 2020. A Novel Document-Level Relation Extraction Method Based on BERT and Entity Information. Xiaoyu Han and Lei Wang.
    👉 Method: use markers to wrap each mention as {[entity type] mention [entity X]}, then use BERT + bilinear classification
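The marker scheme summarized in #12 can be illustrated with a small sketch: each mention gets a type marker before it and an entity-id marker after it, before the token sequence goes to BERT. The tuple layout and marker strings below are assumptions for illustration, not the paper's exact tokens:

```python
def wrap_mentions(tokens, mentions):
    """Wrap each mention with typed markers: [etype] mention [E<id>].

    tokens:   list of word tokens
    mentions: list of non-overlapping (start, end, etype, eid) spans,
              with `end` exclusive
    """
    out, offset = list(tokens), 0
    for start, end, etype, eid in sorted(mentions):
        out.insert(start + offset, f"[{etype}]")   # type marker before the span
        out.insert(end + offset + 1, f"[E{eid}]")  # entity-id marker after it
        offset += 2  # two markers inserted, shift all later spans
    return out
```

For example, `wrap_mentions("Aspirin treats headache".split(), [(0, 1, "CHEM", 1), (2, 3, "DIS", 2)])` yields `['[CHEM]', 'Aspirin', '[E1]', 'treats', '[DIS]', 'headache', '[E2]']`.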

2019

  1. ACL 2019. Inter-sentence Relation Extraction with Document-level Graph Convolutional Neural Network. Sunil Kumar Sahu, Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou.
    👉 Method: GCN + bi-affine classification

  2. ACL 2019. Attention Guided Graph Convolutional Networks for Relation Extraction. Zhijiang Guo, Yan Zhang, Wei Lu.
    👉 Method: multi-head attention to get multi graph with different adj matrix + dense GCN

  3. EMNLP 2019. Connecting the Dots: Document-level Neural Relation Extraction with Edge-oriented Graphs. Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou.
    👉 Method: Edge-oriented graph + Iterative Inference Mechanism

  4. NAACL 2019. Document-Level N-ary Relation Extraction with Multiscale Representation Learning. Robin Jia, Cliff Wong, Hoifung Poon.
    👉 Method: multiscale representation of mentions and entities: LSTM for paragraph-level mention representation, log-sum-exp pooling for the entity representation

  5. arXiv 2019. Fine-tune Bert for DocRED with Two-step Process. Hong Wang, Christfried Focke, Rob Sylvester, Nilesh Mishra, William Wang.
    👉 Method: BERT + bilinear, two-step training: binary classification for relation detection, then multi-class classification for the relation type
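The two-step process in #5 can be sketched as a simple inference pipeline: a binary detector first filters entity pairs, and only the surviving pairs reach the multi-class relation classifier. The callables here are hypothetical stand-ins for the two fine-tuned BERT models:

```python
def two_step_predict(detect, classify, entity_pair):
    """Step 1: binary relation detection; step 2: relation typing.

    detect:   callable returning True if some relation likely holds
    classify: callable returning a relation label
    """
    if not detect(entity_pair):
        return "NA"  # no relation detected: skip the type classifier
    return classify(entity_pair)
```

Splitting detection from typing lets the first model absorb the heavy "no relation" class imbalance, so the second model sees a cleaner label distribution.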

2018

  1. EMNLP 2018. N-ary Relation Extraction using Graph-State LSTM. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea.

  2. NAACL 2018. Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction. Patrick Verga, Emma Strubell, and Andrew McCallum.
    👉 Method: Transformers + CNN as encoder, head MLP and tail MLP for position-specific representations, bilinear + log-sum-exp for entity pair representation
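The bilinear scoring used in #2 (and in several later papers on the list) computes one score per relation from the head and tail representations, score_r = h^T W_r t. A NumPy sketch, with the tensor shapes as assumptions:

```python
import numpy as np

def bilinear_score(head, tail, W):
    """Per-relation bilinear score: score[r] = head^T @ W[r] @ tail.

    head, tail: [dim] vectors (e.g. from the head/tail MLPs)
    W:          [num_relations, dim, dim] relation weight tensor
    """
    return np.einsum("d,rde,e->r", head, W, tail)
```

In the paper this is applied per mention pair, and the mention-pair scores are then aggregated into an entity-pair score with log-sum-exp.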

2017

  1. TACL 2017. Cross-Sentence N-ary Relation Extraction with Graph LSTMs. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih.

Related Papers

The tasks of these papers are somewhat relevant to Doc RE (e.g. cross-sentence RE, GCN for RE, reasoning for MRC) and are of much value as well.

2021

  1. ACL 2021. Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution.
    👉 task: joint information extraction (used DocRED dataset)
    👉 Method: inject KB-text and KB-graph information into the model, using span representation to do NER, coreference, and RE

  2. EACL 2021. Two Training Strategies for Improving Relation Extraction over Universal Graph. Qin Dai, Naoya Inoue, Ryo Takahashi and Kentaro Inui.
    👉 Task: DSRE
    👉 Method: merge text into KG, and improve the "select path" stage with path type (textual, hybrid, KG paths) adaptive pretraining & complexity ranking guided attention

2019

  1. ACL 2019. Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs. Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, and Bowen Zhou.

  2. ACL 2019. Graph Neural Networks with Generated Parameters for Relation Extraction. Hao Zhu, Yankai Lin, Zhiyuan Liu, Jie Fu, Tat-seng Chua, and Maosong Sun.

  3. EMNLP 2019. KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning. Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren.

  4. NAACL 2019. Question Answering by Reasoning Across Documents with Graph Convolutional Networks. Nicola De Cao, Wilker Aziz, Ivan Titov.

  5. NAACL 2019. BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering. Yu Cao, Meng Fang, and Dacheng Tao.

  6. NAACL 2019. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, and Huajun Chen.
    👉 Task: RE
    👉 Method: Use KG knowledge to improve performance on long-tail instances in RE: use KG embeddings and GCN to learn relational knowledge, then attention aggregation to get the final relation representation

  7. AAAI 2019. Neural Relation Extraction within and across Sentence Boundaries. Pankaj Gupta, Subburam Rajaram, Bernt Andrassy, Hinrich Schutze, Thomas Runkler.
    👉 Task: cross sentence RE
    👉 Method: Use an RNN to model the dependency subtree

2018

  1. EMNLP 2018. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction. Yuhao Zhang, Peng Qi, and Christopher D Manning.

  2. Journal of Biomedical Informatics 2018. An effective neural model extracting document level chemical-induced disease relations from biomedical literature. Wei Zheng, Hongfei Lin, Zhiheng Li, Xiaoxia Liu, Zhengguang Li, Bo Xu, Yijia Zhang, Zhihao Yang, and Jian Wang.

  3. NeurIPS 2018. Recurrent Relational Networks. Rasmus Palm, Ulrich Paquet, Ole Winther.

  4. ACL 2018. A Walk-based Model on Entity Graphs for Relation Extraction. Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou.

2017

  1. ACL 2017. Distant Supervision for Relation Extraction beyond the Sentence Boundary. Chris Quirk and Hoifung Poon.

  2. EMNLP 2017. Incorporating relation paths in neural relation extraction. Wenyuan Zeng, Yankai Lin, Zhiyuan Liu, and Maosong Sun.
