# Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction

Current state-of-the-art machine translation systems are based on encoder-decoder architectures that first encode the input sequence and then generate an output sequence from that encoding.
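The encode-then-decode pattern can be illustrated with a toy sketch. Everything below is an illustrative assumption for clarity (toy additive "encoder", greedy dot-product "decoder", hand-made embeddings), not the paper's architecture, which replaces the two-stage pipeline with a single 2D convolutional network.

```python
# Toy sketch of the encoder-decoder pattern: compress the source into a
# fixed state, then greedily emit target tokens from that state.
# All names and embeddings here are illustrative assumptions.

def encode(src_tokens, embed):
    """Encode the source as the element-wise sum of its token embeddings."""
    dim = len(next(iter(embed.values())))
    state = [0.0] * dim
    for tok in src_tokens:
        vec = embed.get(tok, [0.0] * dim)
        state = [s + v for s, v in zip(state, vec)]
    return state

def decode(state, out_embed, max_len=5):
    """Greedily emit the target token whose embedding best matches the state."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    out = []
    for _ in range(max_len):
        tok, score = max(((t, dot(state, v)) for t, v in out_embed.items()),
                         key=lambda p: p[1])
        if tok == "<eos>" or score <= 0:
            break
        out.append(tok)
        # Subtract the emitted token so the decoder moves on to new content.
        state = [s - v for s, v in zip(state, out_embed[tok])]
    return out

src_embed = {"hallo": [1.0, 0.0], "welt": [0.0, 1.0]}
tgt_embed = {"hello": [1.0, 0.0], "world": [0.0, 1.0], "<eos>": [-1.0, -1.0]}
print(decode(encode(["hallo", "welt"], src_embed), tgt_embed))
```

In a real system the encoder and decoder are trained networks and the decoder attends over per-token encoder states rather than a single summed vector; the sketch only shows the two-stage structure the abstract refers to.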


# NLP-Cube: End-to-End Raw Text Processing With Neural Networks

We introduce NLP-Cube: an end-to-end Natural Language Processing framework, evaluated in the CoNLL 2018 Shared Task "Multilingual Parsing from Raw Text to Universal Dependencies".


# Simple Unsupervised Keyphrase Extraction using Sentence Embeddings

EmbedRank achieves higher F-scores than graph-based state-of-the-art systems on standard datasets and is suitable for real-time processing of large amounts of Web data.
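The core idea named in the title, ranking candidate keyphrases by their embedding similarity to the document, can be sketched briefly. EmbedRank itself uses pretrained sentence embeddings (e.g. Sent2Vec); the bag-of-words vectors below are a stand-in assumption so the sketch stays self-contained, and `rank_keyphrases` is an illustrative name, not the paper's API.

```python
# Sketch of embedding-based keyphrase ranking: embed the document and each
# candidate phrase in the same space, rank candidates by cosine similarity.
# Bag-of-words counts stand in for real sentence embeddings (assumption).
from collections import Counter
from math import sqrt

def embed(text, vocab):
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_keyphrases(document, candidates, top_k=2):
    vocab = sorted(set(document.lower().split()))
    doc_vec = embed(document, vocab)
    return sorted(candidates,
                  key=lambda c: cosine(embed(c, vocab), doc_vec),
                  reverse=True)[:top_k]

doc = "neural networks learn sentence embeddings for keyphrase extraction"
cands = ["sentence embeddings", "keyphrase extraction", "banana"]
print(rank_keyphrases(doc, cands))
```

With real sentence embeddings the same ranking step captures semantic rather than purely lexical overlap, which is what makes the approach work without supervision.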


Entity Linking (EL) is an essential task for semantic text understanding and information extraction.

State of the art for Entity Linking on AIDA-CoNLL (Micro-F1 metric).


# An improved neural network model for joint POS tagging and dependency parsing

We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing.


# The Lifted Matrix-Space Model for Semantic Composition

Tree-structured neural network architectures for sentence encoding draw inspiration from the approach to semantic composition generally seen in formal linguistics, and have shown empirical improvements over comparable sequence models by doing so.


# Improving Response Selection in Multi-Turn Dialogue Systems by Incorporating Domain Knowledge

Building systems that can communicate with humans is a core problem in Artificial Intelligence.


# Evolutionary Data Measures: Understanding the Difficulty of Text Classification Tasks

In this paper, we analyse which characteristics of a dataset best determine how difficult that dataset is for the task of text classification.


# Semi-Supervised Neural System for Tagging, Parsing and Lemmatization

This paper describes the ICS PAS system, which took part in the CoNLL 2018 shared task on Multilingual Parsing from Raw Text to Universal Dependencies.


# Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge

Adversarial examples are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation.
