Contextualized Word Embeddings in a Neural Open Information Extraction Model

Published in International Conference on Applications of Natural Language to Information Systems, 2019

Recommended citation: Injy Sarhan and Marco R. Spruit. "Contextualized Word Embeddings in a Neural Open Information Extraction Model." International Conference on Applications of Natural Language to Information Systems. Springer, June 2019. https://link.springer.com/chapter/10.1007/978-3-030-23281-8_31

Open Information Extraction (OIE) is the challenging task of extracting relation tuples from an unstructured corpus. While several OIE algorithms have been developed over the past decade, only a few employ deep learning techniques. In this paper, we present a novel neural OIE model that leverages Recurrent Neural Networks (RNNs) built on Gated Recurrent Units (GRUs). Moreover, we integrate contextualized word embeddings into our OIE model, which further enhances performance. The results demonstrate that our proposed neural OIE model outperforms the existing state of the art on two datasets.
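For intuition, the GRU recurrence underlying the model can be sketched as below. This is a toy NumPy implementation with randomly initialized weights, not the paper's actual architecture or hyperparameters; in the full model, the input vectors `xs` would come from contextualized word embeddings rather than random features.

```python
import numpy as np

def gru_step(x, h_prev, params):
    """One GRU time step over input x and previous hidden state h_prev."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde             # gated interpolation

def run_gru(xs, hidden_dim, seed=0):
    """Run a single-layer GRU over a sequence of embeddings xs
    (shape: [seq_len, input_dim]); returns all hidden states."""
    rng = np.random.default_rng(seed)
    input_dim = xs.shape[1]
    mat = lambda n, m: 0.1 * rng.standard_normal((n, m))
    params = (mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), np.zeros(hidden_dim),
              mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), np.zeros(hidden_dim),
              mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), np.zeros(hidden_dim))
    h = np.zeros(hidden_dim)
    states = []
    for x in xs:
        h = gru_step(x, h, params)
        states.append(h)
    return np.stack(states)

# Example: a 5-token sentence with 8-dimensional (stand-in) embeddings.
hidden = run_gru(np.ones((5, 8)), hidden_dim=4)
```

In the paper's setting, the per-token hidden states produced this way would feed a tagging layer that labels each token's role in an extracted relation tuple.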

Download paper here.