Search Results for author: Thomas Demeester

Found 36 papers, 21 papers with code

Towards Consistent Document-level Entity Linking: Joint Models for Entity Linking and Coreference Resolution

no code implementations 30 Aug 2021 Klim Zaporojets, Johannes Deleu, Thomas Demeester, Chris Develder

We consider the task of document-level entity linking (EL), where it is important to make consistent decisions for entity mentions jointly across the full document.

Coreference Resolution Document-level +2

Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution

1 code implementation 5 Jul 2021 Severine Verlinden, Klim Zaporojets, Johannes Deleu, Thomas Demeester, Chris Develder

The KB entity representations used are learned from either (i) hyperlinked text documents (Wikipedia), or (ii) a knowledge graph (Wikidata), and appear complementary in raising IE performance.

Coreference Resolution Entity Linking +2

A Million Tweets Are Worth a Few Points: Tuning Transformers for Customer Service Tasks

1 code implementation NAACL 2021 Amir Hadifar, Sofie Labat, Véronique Hoste, Chris Develder, Thomas Demeester

In online domain-specific customer service applications, many companies struggle to deploy advanced NLP models successfully, due to the limited availability of and noise in their datasets.

DWIE: an entity-centric dataset for multi-task document-level information extraction

1 code implementation 26 Sep 2020 Klim Zaporojets, Johannes Deleu, Chris Develder, Thomas Demeester

Second, the document-level multi-task annotations require the models to transfer information between entity mentions located in different parts of the document, as well as between different tasks, in a joint learning setting.

 Ranked #1 on Coreference Resolution on DWIE (Avg. F1 metric)

Coreference Resolution Document-level +5

Solving Arithmetic Word Problems by Scoring Equations with Recursive Neural Networks

no code implementations 11 Sep 2020 Klim Zaporojets, Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder

Recent works automatically extract and rank candidate solution equations that provide the answer to arithmetic word problems.
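
As a rough illustration of the equation-scoring idea in this paper's title, the sketch below embeds a candidate equation tree bottom-up with a recursive neural network and scores the root. The class name, dimensions, and operator set are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: score a candidate equation tree with a recursive NN.
# Leaves are numbers; internal nodes are ("add"|"sub"|"mul"|"div", left, right).
import torch
import torch.nn as nn

class EquationScorer(nn.Module):
    def __init__(self, d=64, ops=("add", "sub", "mul", "div")):
        super().__init__()
        self.leaf = nn.Linear(1, d)                            # embed a quantity
        self.compose = nn.ModuleDict({op: nn.Linear(2 * d, d) for op in ops})
        self.score = nn.Linear(d, 1)                           # plausibility of the tree

    def embed(self, node):
        if isinstance(node, (int, float)):                     # leaf: a number
            return torch.tanh(self.leaf(torch.tensor([[float(node)]])))
        op, left, right = node                                 # internal node
        pair = torch.cat([self.embed(left), self.embed(right)], dim=-1)
        return torch.tanh(self.compose[op](pair))

    def forward(self, tree):
        return self.score(self.embed(tree))                    # higher = more plausible

# e.g. rank candidates for "3 + 4 * 2":
# scorer = EquationScorer()
# scorer(("add", 3.0, ("mul", 4.0, 2.0)))
```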

Block-wise Dynamic Sparseness

1 code implementation 14 Jan 2020 Amir Hadifar, Johannes Deleu, Chris Develder, Thomas Demeester

In this paper, we present a new method for dynamic sparseness, whereby part of the computations is omitted dynamically, based on the input.

Language Modelling
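
The mechanism described above can be sketched as follows, assuming a linear layer whose input features are partitioned into fixed-size blocks and a small gating network that selects, per input, which blocks take part in the product. Names and sizes are illustrative, and the mask only simulates the computation an optimized kernel would actually skip.

```python
# Hedged sketch of block-wise dynamic sparseness: a tiny gating network
# decides per input which feature blocks participate in the matrix product;
# dropped blocks contribute nothing to the output.
import torch
import torch.nn as nn

class BlockSparseLinear(nn.Module):
    def __init__(self, in_features, out_features, block=64, keep_k=2):
        super().__init__()
        assert in_features % block == 0
        self.block, self.keep_k = block, keep_k
        self.n_blocks = in_features // block
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.gate = nn.Linear(in_features, self.n_blocks)   # scores each block

    def forward(self, x):                                    # x: (batch, in_features)
        scores = self.gate(x)                                # (batch, n_blocks)
        topk = scores.topk(self.keep_k, dim=-1).indices
        mask = torch.zeros_like(scores).scatter_(-1, topk, 1.0)
        feat_mask = mask.repeat_interleave(self.block, dim=-1)
        return (x * feat_mask) @ self.weight.t()             # dropped blocks give 0
```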

System Identification with Time-Aware Neural Sequence Models

1 code implementation 21 Nov 2019 Thomas Demeester

Established recurrent neural networks are well-suited to solve a wide variety of prediction tasks involving discrete sequences.

A Self-Training Approach for Short Text Clustering

1 code implementation WS 2019 Amir Hadifar, Lucas Sterckx, Thomas Demeester, Chris Develder

Short text clustering is a challenging problem when adopting traditional bag-of-words or TF-IDF representations, since these lead to sparse vector representations of the short texts.

Deep Clustering Sentence Embedding +1
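
For context, the self-training step in DEC-style deep clustering, which this line of work builds on, sharpens the soft cluster assignments into an auxiliary target distribution. A minimal sketch of that step (the paper's exact setup may differ):

```python
# Hedged sketch of the DEC-style self-training target: square and renormalize
# the soft assignments q (n_points x n_clusters) to emphasize confident
# points; the embedder is then trained to match p via a KL-divergence loss.
import numpy as np

def target_distribution(q):
    weight = q ** 2 / q.sum(axis=0)
    return weight / weight.sum(axis=1, keepdims=True)
```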

Neural Probabilistic Logic Programming in DeepProbLog

no code implementations NeurIPS 2018 Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, Luc De Raedt

We introduce DeepProbLog, a neural probabilistic logic programming language that incorporates deep learning by means of neural predicates.

Program induction

Sub-event detection from Twitter streams as a sequence labeling problem

1 code implementation NAACL 2019 Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder

This paper introduces improved methods for sub-event detection in social media streams, by applying neural sequence models not only on the level of individual posts, but also directly on the stream level.

Event Detection

Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?

1 code implementation EMNLP 2018 Fréderic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, Thomas Demeester

In this paper, we investigate which character-level patterns neural networks learn and if those patterns coincide with manually-defined word segmentations and annotations.

Morphological Tagging

Predefined Sparseness in Recurrent Sequence Models

1 code implementation CoNLL 2018 Thomas Demeester, Johannes Deleu, Fréderic Godin, Chris Develder

Inducing sparseness while training neural networks has been shown to yield models with a lower memory footprint but similar effectiveness to dense models.

Language Modelling Word Embeddings
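
In contrast to induced sparseness, a predefined pattern can be fixed before training. A minimal sketch of that idea, with an assumed block-diagonal mask on the recurrent weight matrix (the paper's actual patterns may differ):

```python
# Hedged sketch of predefined sparseness: fix a binary mask over the
# recurrent weight matrix before training and apply it at every step, so
# parameters outside the pattern never contribute (and in a real system
# would not be stored at all). The block-diagonal pattern is an assumption.
import torch
import torch.nn as nn

def block_diagonal_mask(dim, n_blocks):
    mask = torch.zeros(dim, dim)
    step = dim // n_blocks
    for b in range(n_blocks):
        mask[b * step:(b + 1) * step, b * step:(b + 1) * step] = 1.0
    return mask

class SparseRecurrentCell(nn.Module):
    def __init__(self, dim, n_blocks=4):
        super().__init__()
        self.W_in = nn.Linear(dim, dim)
        self.W_rec = nn.Parameter(torch.randn(dim, dim) * 0.01)
        self.register_buffer("mask", block_diagonal_mask(dim, n_blocks))

    def forward(self, x, h):
        # Only the masked recurrent weights are predefined-sparse here.
        return torch.tanh(self.W_in(x) + h @ (self.W_rec * self.mask).t())
```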

Adversarial training for multi-context joint entity and relation extraction

1 code implementation EMNLP 2018 Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder

Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data.

Joint Entity and Relation Extraction
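
The AT regularizer described above can be sketched generically: compute the loss gradient with respect to the (embedded) inputs, perturb a small step in that direction, and train on both the clean and perturbed losses. This is a minimal first-order sketch; epsilon, the model, and the loss function are placeholders, not the paper's configuration.

```python
# Hedged sketch of adversarial training on (embedded) inputs: perturb along
# the loss gradient (the first-order worst case) and sum clean + adversarial
# losses for training.
import torch

def adversarial_loss(model, embeds, labels, loss_fn, epsilon=1e-2):
    embeds = embeds.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeds), labels)
    grad, = torch.autograd.grad(clean_loss, embeds)
    delta = epsilon * grad / (grad.norm() + 1e-12)   # small, loss-increasing step
    adv_loss = loss_fn(model(embeds + delta), labels)
    return clean_loss + adv_loss
```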

Jack the Reader - A Machine Reading Framework

1 code implementation ACL 2018 Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, Sebastian Riedel

For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.

Information Retrieval Link Prediction +4

Prior Attention for Style-aware Sequence-to-Sequence Models

no code implementations 25 Jun 2018 Lucas Sterckx, Johannes Deleu, Chris Develder, Thomas Demeester

We extend sequence-to-sequence models with the possibility to control the characteristics or style of the generated output, via attention that is generated a priori (before decoding) from a latent code vector.

Lexical Simplification

Jack the Reader - A Machine Reading Framework

2 code implementations 20 Jun 2018 Dirk Weissenborn, Pasquale Minervini, Tim Dettmers, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Pontus Stenetorp, Sebastian Riedel

For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.

Link Prediction Natural Language Inference +3

DeepProbLog: Neural Probabilistic Logic Programming

2 code implementations NeurIPS 2018 Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, Luc De Raedt

We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates.

Program induction
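
To make "neural predicates" concrete: in DeepProbLog, a logic rule's probability can be supplied by a neural network. Below is the flavor of the paper's MNIST-addition example, embedded as a plain string; the surrounding Python is illustrative only and not the DeepProbLog API.

```python
# Hedged sketch of a neural predicate, adapted from the paper's MNIST-addition
# example: digit/2 gets its distribution over labels 0..9 from a neural
# network (mnist_net), and ordinary logic composes digits into an addition.
mnist_addition_program = """
nn(mnist_net, [X], Y, [0,1,2,3,4,5,6,7,8,9]) :: digit(X, Y).
addition(X, Y, Z) :- digit(X, DX), digit(Y, DY), Z is DX + DY.
"""
```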

Joint entity recognition and relation extraction as a multi-head selection problem

6 code implementations 20 Apr 2018 Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder

State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers.

POS
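
The "multi-head selection" formulation can be sketched as scoring every (token, head, relation) triple with independent sigmoids, so each token may select several heads. The additive scorer and dimensions below are assumptions in the spirit of the paper, not its exact architecture.

```python
# Hedged sketch of multi-head selection over token encodings h.
import torch
import torch.nn as nn

class MultiHeadSelection(nn.Module):
    def __init__(self, hidden, n_relations, d=128):
        super().__init__()
        self.U = nn.Linear(hidden, d)   # projects the dependent token
        self.W = nn.Linear(hidden, d)   # projects the candidate head
        self.v = nn.Linear(d, n_relations)

    def forward(self, h):               # h: (seq_len, hidden)
        # s[i, j, r] = score that token i has head j under relation r.
        s = self.v(torch.tanh(self.U(h)[:, None, :] + self.W(h)[None, :, :]))
        return torch.sigmoid(s)         # (seq_len, seq_len, n_relations)
```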

Character-level Recurrent Neural Networks in Practice: Comparing Training and Sampling Schemes

2 code implementations 2 Jan 2018 Cedric De Boom, Thomas Demeester, Bart Dhoedt

Recurrent neural networks are nowadays successfully used in an abundance of applications, ranging from text, speech, and image processing to recommender systems.

Recommendation Systems

An attentive neural architecture for joint segmentation and parsing and its application to real estate ads

1 code implementation 27 Sep 2017 Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder

In this work, we propose a new joint model that is able to tackle the two tasks simultaneously and construct the property tree by (i) avoiding the error propagation that would arise from performing the subtasks one after the other in a pipelined fashion, and (ii) exploiting the interactions between the subtasks.

Dependency Parsing

Adversarial Sets for Regularising Neural Link Predictors

1 code implementation 24 Jul 2017 Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, Sebastian Riedel

The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples.

Link Prediction Relational Reasoning
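
The minimax objective described above can be written compactly. One hedged formalization, with generic symbols and an assumed trade-off weight lambda:

```latex
% The adversary chooses the adversarial set S maximizing the inconsistency
% loss, while the parameters \theta minimize the supervised loss plus that
% worst case.
\min_{\theta}\; \mathcal{L}_{\mathrm{sup}}(\theta)
  \;+\; \lambda\, \max_{\mathcal{S}}\; \mathcal{L}_{\mathrm{inc}}(\mathcal{S};\,\theta)
```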

Reconstructing the house from the ad: Structured prediction on real estate classifieds

1 code implementation EACL 2017 Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder

In this paper, we address the (to the best of our knowledge) new problem of extracting a structured description of real estate properties from their natural language descriptions in classifieds.

Dependency Parsing Named Entity Recognition +1

Representation learning for very short texts using weighted word embedding aggregation

1 code implementation 2 Jul 2016 Cedric De Boom, Steven Van Canneyt, Thomas Demeester, Bart Dhoedt

Traditional textual representations, such as tf-idf, have difficulty grasping the semantic meaning of such texts, which is important in applications such as event detection, opinion mining, news recommendation, etc.

Event Detection News Recommendation +4
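
A minimal sketch of weighted word-embedding aggregation for a very short text, assuming a pretrained embedding lookup and an idf table; both are placeholders, and the idf weights stand in for the weights the paper actually learns.

```python
# Hedged sketch: combine the word vectors of a short text with per-word
# weights and take the weighted average as the text representation.
import numpy as np

def short_text_vector(tokens, embeddings, idf, dim=300):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    weights = [idf.get(t, 1.0) for t in tokens if t in embeddings]
    if not vecs:
        return np.zeros(dim)
    w = np.asarray(weights)[:, None]
    return (np.asarray(vecs) * w).sum(axis=0) / w.sum()
```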

Lifted Rule Injection for Relation Embeddings

no code implementations EMNLP 2016 Thomas Demeester, Tim Rocktäschel, Sebastian Riedel

Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks.

Representation Learning

Efficiency Evaluation of Character-level RNN Training Schedules

1 code implementation 9 May 2016 Cedric De Boom, Sam Leroux, Steven Bohez, Pieter Simoens, Thomas Demeester, Bart Dhoedt

We present four training and prediction schedules for the same character-level recurrent neural network.

Learning Semantic Similarity for Very Short Texts

no code implementations 2 Dec 2015 Cedric De Boom, Steven Van Canneyt, Steven Bohez, Thomas Demeester, Bart Dhoedt

We therefore investigated several text representations as a combination of word embeddings in the context of semantic pair matching.

Information Retrieval Semantic Similarity +2

Knowledge Base Population using Semantic Label Propagation

no code implementations 19 Nov 2015 Lucas Sterckx, Thomas Demeester, Johannes Deleu, Chris Develder

We propose to combine distant supervision with minimal manual supervision in a technique called feature labeling, to eliminate noise from the large and noisy initial training set, resulting in a significant increase of precision.

Knowledge Base Population
