5 code implementations • EMNLP 2018 • Anoop Raveendra Katti, Christian Reisswig, Cordula Guder, Sebastian Brarda, Steffen Bickel, Johannes Höhne, Jean Baptiste Faddoul
We introduce a novel type of text representation that preserves the 2D layout of a document.
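The chargrid idea is to rasterize a document's characters onto a 2D grid so that spatial layout survives into the model input. Below is a minimal sketch of such a rasterization, assuming OCR output as `(char, bounding_box)` pairs in page coordinates and a character-to-index `vocab`; the function name and grid size are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def build_chargrid(char_boxes, page_width, page_height, vocab,
                   grid_h=128, grid_w=128):
    """Rasterize characters into a 2D grid that preserves document layout.

    char_boxes: list of (char, (x0, y0, x1, y1)) in page coordinates.
    vocab: dict mapping characters to integer indices (0 = background).
    """
    grid = np.zeros((grid_h, grid_w), dtype=np.int64)
    for char, (x0, y0, x1, y1) in char_boxes:
        # Scale the character's bounding box to grid coordinates,
        # keeping at least one cell per character.
        gx0 = int(x0 / page_width * grid_w)
        gx1 = max(gx0 + 1, int(x1 / page_width * grid_w))
        gy0 = int(y0 / page_height * grid_h)
        gy1 = max(gy0 + 1, int(y1 / page_height * grid_h))
        # Fill the covered cells with the character's index.
        grid[gy0:gy1, gx0:gx1] = vocab.get(char, 0)
    return grid
```

The resulting integer grid can then be one-hot encoded and fed to a convolutional network, the same way an image would be.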
no code implementations • WS 2017 • Sebastian Brarda, Philip Yeres, Samuel R. Bowman
In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention: each word in the input sequence is weighted according not only to how well it matches a query, but also to how well its surrounding words match.
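One way to realize this is to run a recurrent layer over the per-word match features before computing the softmax, so each attention weight sees its neighbors' match quality. The PyTorch sketch below illustrates that idea; the GRU, hidden size, and element-wise match features are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SequentialAttention(nn.Module):
    """Soft attention whose weights also depend on neighboring match scores.

    A bidirectional GRU runs over per-token query-match features, so the
    attention weight of each word reflects how well nearby words match too.
    (Illustrative sketch; layer choices are assumptions.)
    """

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, tokens, query):
        # tokens: (batch, seq_len, dim), query: (batch, dim)
        # Element-wise match features between each word and the query.
        match = tokens * query.unsqueeze(1)          # (B, T, dim)
        # Propagate match information across neighboring positions.
        context, _ = self.rnn(match)                 # (B, T, 2*hidden)
        weights = torch.softmax(self.score(context).squeeze(-1), dim=-1)
        # Attention-weighted sum of the token representations.
        return (weights.unsqueeze(-1) * tokens).sum(dim=1), weights
```

With the GRU removed and the score computed directly from `match`, this reduces to ordinary soft attention, which is the baseline the Sequential Attention layer extends.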