no code implementations • WS 2017 • Sebastian Brarda, Philip Yeres, Samuel R. Bowman
In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention by assigning a weight to each word in an input sequence based not only on how well that word matches a query, but also on how well the surrounding words match.
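A minimal sketch of what such a context-aware attention layer could look like, assuming per-token match features (here an elementwise product of token and query encodings) are passed through a bidirectional LSTM before scoring, so each weight also reflects neighboring matches. The class name, hidden size, and choice of match feature are illustrative, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class SequentialAttention(nn.Module):
    """Context-aware attention sketch: match features are mixed by a
    BiLSTM before scoring, so a token's weight depends on how well
    its neighbors match the query too. Names are illustrative."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, tokens, query):
        # tokens: (batch, seq_len, dim); query: (batch, dim)
        match = tokens * query.unsqueeze(1)          # per-token match features
        context, _ = self.rnn(match)                 # mix in surrounding matches
        energies = self.score(context).squeeze(-1)   # (batch, seq_len)
        weights = torch.softmax(energies, dim=-1)
        summary = (weights.unsqueeze(-1) * tokens).sum(dim=1)
        return summary, weights

# Example call with random encodings:
layer = SequentialAttention(dim=128)
summary, weights = layer(torch.randn(2, 10, 128), torch.randn(2, 128))
```

Plain soft attention would score each token's match independently; routing the match features through a recurrent layer first is what lets the weight at one position take nearby matches into account.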
8 code implementations • 25 Apr 2017 • Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Lawrence Jackel, Urs Muller
Training the network end to end eliminates the need for human engineers to anticipate what is important in an image or to foresee all the necessary rules for safe driving.
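A minimal sketch of an end-to-end steering regressor of this kind, mapping a camera image directly to a steering command. The layer sizes and names below are illustrative assumptions, not the published PilotNet architecture:

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """End-to-end image-to-steering sketch: a small CNN learns its own
    visual features, then regresses a single steering value. Layer
    sizes are illustrative, not the paper's exact configuration."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),  # infers flattened size on first call
            nn.Linear(100, 1),              # single output: steering angle
        )

    def forward(self, images):
        # images: (batch, 3, H, W) camera frames
        return self.head(self.features(images))
```

Such a network would be fit to (image, steering angle) pairs recorded from human driving with a regression loss, so the features relevant to steering are learned from data rather than specified by hand.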