Selectively connected self-attentions for semantic role labeling

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Semantic role labeling is an effective approach to understanding the underlying meanings associated with word relationships in natural-language sentences. Recent studies using deep neural networks, specifically recurrent neural networks, have significantly improved on traditional shallow models. However, because of the limitation of recurrent updates, they require long training times over large data sets. Moreover, they cannot capture the hierarchical structures of languages. We propose a novel deep neural model for semantic role labeling that provides selective connections among attentive representations and removes the recurrent updates. Experimental results show that our model outperforms state-of-the-art studies in accuracy, achieving F1 scores of 86.6 and 83.6 on the CoNLL 2005 and CoNLL 2012 shared tasks, respectively. The accuracy gains come from capturing hierarchical information with the connection module. Moreover, we show that our model can be parallelized to avoid repetitive updates of the model; as a result, it reduces training time by 62 percent compared with the baseline.
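The paper's exact architecture is not reproduced here, but the two ingredients the abstract names, self-attention (which replaces recurrent updates and parallelizes over tokens) and a selective connection that mixes representations across layers, can be sketched as follows. This is a minimal illustration with NumPy; all function names, the sigmoid gating form, and the weight shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a sequence of token vectors.
    # X: (seq_len, d_model); every token attends to every other token,
    # so the whole sequence is processed in one parallel matrix product,
    # with no step-by-step recurrent update.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (seq_len, seq_len) attention weights
    return attn @ V

def selective_connection(lower, upper, gate_logits):
    # Hypothetical "selective connection": a learned element-wise gate that
    # chooses between a lower-layer and an upper-layer representation,
    # letting the network retain hierarchical information across layers.
    g = 1.0 / (1.0 + np.exp(-gate_logits))  # sigmoid gate in (0, 1)
    return g * upper + (1.0 - g) * lower

# Tiny usage example: one attention layer plus a gated skip connection.
rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)
out = selective_connection(X, H, np.zeros_like(X))  # gate 0.5 everywhere
```

Because attention is computed as dense matrix products over the whole sequence, the layer has no sequential dependency between positions, which is the property the abstract credits for the reduced training time.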

Original language: English
Article number: 1716
Journal: Applied Sciences (Switzerland)
Volume: 9
Issue number: 8
DOIs
State: Published - 1 Apr 2019

Keywords

  • Attention mechanism
  • Selective connection
  • Semantic role labeling
