no code implementations • CRAC (ACL) 2021 • Semere Kiros Bitew, Johannes Deleu, Chris Develder, Thomas Demeester
Large annotated corpora for coreference resolution are available for only a few languages.
no code implementations • dialdoc (ACL) 2022 • Yiwei Jiang, Amir Hadifar, Johannes Deleu, Thomas Demeester, Chris Develder
Further, error analysis reveals two major failure cases, to be addressed in future work: (i) in case of topic shift within the dialog, retrieval often fails to select the correct grounding document(s), and (ii) generation sometimes fails to use the correctly retrieved grounding passage.
1 code implementation • 2 Jun 2023 • Semere Kiros Bitew, Johannes Deleu, A. Seza Dogruöz, Chris Develder, Thomas Demeester
Performing exercises (including, e.g., practice tests) forms a crucial component of learning, yet creating such exercises requires non-trivial effort from the teacher.
1 code implementation • 22 May 2023 • Karel D'Oosterlinck, François Remy, Johannes Deleu, Thomas Demeester, Chris Develder, Klim Zaporojets, Aneiss Ghodsi, Simon Ellershaw, Jack Collins, Christopher Potts
We introduce BioDEX, a large-scale resource for Biomedical adverse Drug Event Extraction, rooted in the historical output of drug safety reporting in the U.S. It consists of 65k abstracts and 19k full-text biomedical papers with 256k associated document-level safety reports created by medical experts.
1 code implementation • 5 Feb 2023 • Klim Zaporojets, Lucie-Aimee Kaffee, Johannes Deleu, Thomas Demeester, Chris Develder, Isabelle Augenstein
For that study, we introduce TempEL, an entity linking dataset that consists of time-stratified English Wikipedia snapshots from 2013 to 2022, from which we collect both anchor mentions of entities, and these target entities' descriptions.
1 code implementation • 25 Oct 2022 • Semere Kiros Bitew, Amir Hadifar, Lucas Sterckx, Johannes Deleu, Chris Develder, Thomas Demeester
This paper studies how a large existing set of manually created answers and distractors for questions over a variety of domains, subjects, and languages can be leveraged to help teachers in creating new MCQs, by the smart reuse of existing distractors.
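As an illustration of that reuse idea, the sketch below retrieves the most similar existing questions for a new question and proposes their distractors as candidates; the TF-IDF retriever and the toy question pool are assumptions for illustration, not the paper's actual ranking model.

```python
# Hypothetical sketch: rank existing questions by TF-IDF similarity to a new
# question and reuse their distractors as candidates (pool is illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pool = [
    {"question": "What is the capital of France?", "distractors": ["Lyon", "Marseille", "Nice"]},
    {"question": "What is the capital of Spain?", "distractors": ["Barcelona", "Seville", "Valencia"]},
    {"question": "Which gas do plants absorb?", "distractors": ["Oxygen", "Nitrogen", "Helium"]},
]

def suggest_distractors(new_question, pool, top_k=2):
    """Return distractors reused from the top-k most similar existing questions."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([q["question"] for q in pool] + [new_question])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sims.argsort()[::-1][:top_k]
    suggestions = []
    for idx in ranked:
        suggestions.extend(pool[idx]["distractors"])
    return suggestions

print(suggest_distractors("What is the capital of Italy?", pool))
```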
no code implementations • 12 Oct 2022 • Amir Hadifar, Semere Kiros Bitew, Johannes Deleu, Chris Develder, Thomas Demeester
Thus, our versatile dataset can be used for both question and distractor generation, as well as to explore new challenges such as question format conversion.
1 code implementation • 13 Sep 2022 • Jens-Joris Decorte, Jeroen Van Hautte, Johannes Deleu, Chris Develder, Thomas Demeester
We introduce a manually annotated evaluation benchmark for skill extraction based on the ESCO taxonomy, on which we validate our models.
1 code implementation • 17 Jun 2022 • Yiwei Jiang, Klim Zaporojets, Johannes Deleu, Thomas Demeester, Chris Develder
This work presents a new dialog dataset, CookDial, that facilitates research on task-oriented dialog systems with procedural knowledge understanding.
1 code implementation • ACL 2022 • Klim Zaporojets, Johannes Deleu, Yiwei Jiang, Thomas Demeester, Chris Develder
We consider the task of document-level entity linking (EL), where it is important to make consistent decisions for entity mentions over the full document jointly.
1 code implementation • Findings (ACL) 2021 • Severine Verlinden, Klim Zaporojets, Johannes Deleu, Thomas Demeester, Chris Develder
The KB entity representations used are learned from either (i) hyperlinked text documents (Wikipedia) or (ii) a knowledge graph (Wikidata), and the two sources appear complementary in raising IE performance.
Ranked #1 on Relation Extraction on DWIE
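A minimal sketch of injecting such KB entity representations into an IE model, assuming a gated fusion of a span's contextual representation with a pretrained Wikipedia/Wikidata entity embedding (the gating mechanism and dimensions are illustrative, not the paper's exact architecture):

```python
# Illustrative sketch: fuse a span's contextual representation with a
# pretrained KB entity embedding via a learned gate.
import torch
import torch.nn as nn

class KBFusion(nn.Module):
    def __init__(self, span_dim, kb_dim):
        super().__init__()
        self.project = nn.Linear(kb_dim, span_dim)   # map KB embedding to span space
        self.gate = nn.Linear(2 * span_dim, span_dim)

    def forward(self, span_repr, kb_embedding):
        kb = self.project(kb_embedding)
        g = torch.sigmoid(self.gate(torch.cat([span_repr, kb], dim=-1)))
        return g * span_repr + (1 - g) * kb          # gated mixture of text and KB signal

fusion = KBFusion(span_dim=256, kb_dim=100)
span = torch.randn(4, 256)      # 4 candidate spans from the text encoder
kb = torch.randn(4, 100)        # matching Wikipedia/Wikidata entity embeddings
print(fusion(span, kb).shape)   # torch.Size([4, 256])
```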
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Yiwei Jiang, Klim Zaporojets, Johannes Deleu, Thomas Demeester, Chris Develder
We propose a newly annotated dataset for information extraction on recipes.
2 code implementations • 26 Sep 2020 • Klim Zaporojets, Johannes Deleu, Chris Develder, Thomas Demeester
Second, the document-level multi-task annotations require the models to transfer information between entity mentions located in different parts of the document, as well as between different tasks, in a joint learning setting.
Ranked #1 on Coreference Resolution on DWIE (Avg. F1 metric)
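A minimal sketch of such a joint learning setup, assuming a shared document encoder with separate task heads trained under a summed loss (the heads, dimensions, and span pairing are placeholders, not the DWIE baseline itself):

```python
# Illustrative joint multi-task setup: shared encoder, one head per task,
# trained with a summed loss (all sizes and heads are toy placeholders).
import torch
import torch.nn as nn

class JointIEModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256,
                 n_ner_tags=9, n_relations=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.ner_head = nn.Linear(2 * hidden, n_ner_tags)    # token-level NER tags
        self.rel_head = nn.Linear(4 * hidden, n_relations)   # pairwise relation scores

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        ner_logits = self.ner_head(states)
        # score a relation between the first and last token as a stand-in for span pairs
        pair = torch.cat([states[:, 0], states[:, -1]], dim=-1)
        rel_logits = self.rel_head(pair)
        return ner_logits, rel_logits

model = JointIEModel()
tokens = torch.randint(0, 10000, (2, 20))          # toy batch of 2 documents
ner_logits, rel_logits = model(tokens)
ner_labels = torch.randint(0, 9, (2, 20))
rel_labels = torch.randint(0, 5, (2,))
loss = (nn.functional.cross_entropy(ner_logits.reshape(-1, 9), ner_labels.reshape(-1))
        + nn.functional.cross_entropy(rel_logits, rel_labels))   # joint loss over tasks
print(ner_logits.shape, rel_logits.shape, loss.item())
```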
no code implementations • 11 Sep 2020 • Klim Zaporojets, Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
Recent works automatically extract and rank candidate solution equations that provide the answer to arithmetic word problems.
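A toy sketch of that extract-and-rank idea: enumerate candidate equations over the quantities found in the problem text and rank them with a scoring function (the enumeration and the scorer below are placeholders; the cited works learn the ranking from data).

```python
# Toy sketch: enumerate candidate equations over the extracted quantities,
# then rank them with a (placeholder) scoring function.
import itertools
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def candidate_equations(quantities):
    """Enumerate simple binary equations over the extracted quantities."""
    for a, b in itertools.permutations(quantities, 2):
        for sym, op in OPS.items():
            yield f"{a} {sym} {b}", op(a, b)

def score(problem_text, equation):
    """Placeholder scorer; in practice the ranking model is learned from data."""
    return 1.0 if ("+" in equation) == ("altogether" in problem_text) else 0.0

problem = "Tom has 3 apples and buys 5 more. How many apples does he have altogether?"
quantities = [3, 5]
ranked = sorted(candidate_equations(quantities),
                key=lambda pair: score(problem, pair[0]), reverse=True)
print(ranked[0])   # best-scoring candidate equation and its result
```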
1 code implementation • 14 Jan 2020 • Amir Hadifar, Johannes Deleu, Chris Develder, Thomas Demeester
In this paper, we present a new method for dynamic sparseness, whereby part of the computation is omitted dynamically, based on the input.
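A minimal sketch of the dynamic sparseness idea, assuming a block-structured linear layer whose blocks are switched on or off per input by a learned gate (the block layout and hard gate are illustrative; an efficient implementation would skip the gated-out computations entirely):

```python
# Illustrative dynamic sparseness: an input-dependent gate decides which
# blocks of the layer are used for each example (sketch only).
import torch
import torch.nn as nn

class DynamicSparseLinear(nn.Module):
    def __init__(self, in_dim, out_dim, n_blocks=4):
        super().__init__()
        assert out_dim % n_blocks == 0
        self.blocks = nn.ModuleList(
            nn.Linear(in_dim, out_dim // n_blocks) for _ in range(n_blocks))
        self.gate = nn.Linear(in_dim, n_blocks)   # decides per input which blocks to keep

    def forward(self, x):
        keep = (torch.sigmoid(self.gate(x)) > 0.5).float()    # hard 0/1 mask per block
        outputs = []
        for i, block in enumerate(self.blocks):
            outputs.append(block(x) * keep[:, i:i + 1])        # gated-out blocks contribute zeros
        return torch.cat(outputs, dim=-1)

layer = DynamicSparseLinear(in_dim=64, out_dim=128)
print(layer(torch.randn(8, 64)).shape)   # torch.Size([8, 128])
```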
no code implementations • WS 2019 • Semere Kiros Bitew, Giannis Bekoulis, Johannes Deleu, Lucas Sterckx, Klim Zaporojets, Thomas Demeester, Chris Develder
This paper describes IDLab's text classification systems submitted to Task A as part of the CLPsych 2019 shared task.
1 code implementation • NAACL 2019 • Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
This paper introduces improved methods for sub-event detection in social media streams, by applying neural sequence models not only on the level of individual posts, but also directly on the stream level.
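A minimal sketch of a two-level setup in that spirit: a word-level encoder per post feeding a stream-level encoder that tags each post (sizes and the label set are toy assumptions, not the paper's exact model).

```python
# Sketch of a two-level sequence model for sub-event detection: a word-level
# encoder per post, then a stream-level encoder over the post representations.
import torch
import torch.nn as nn

class StreamTagger(nn.Module):
    def __init__(self, vocab=5000, emb=64, hidden=64, n_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.post_enc = nn.LSTM(emb, hidden, batch_first=True)
        self.stream_enc = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hidden, n_labels)   # one label per post in the stream

    def forward(self, stream):                                   # stream: (n_posts, n_words)
        _, (post_repr, _) = self.post_enc(self.embed(stream))    # (1, n_posts, hidden)
        stream_states, _ = self.stream_enc(post_repr)            # posts treated as one sequence
        return self.tagger(stream_states)                        # (1, n_posts, n_labels)

model = StreamTagger()
stream = torch.randint(0, 5000, (12, 20))   # 12 posts of 20 tokens each
print(model(stream).shape)                   # torch.Size([1, 12, 3])
```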
no code implementations • 27 Sep 2018 • Nasrin Sadeghianpourhamami, Johannes Deleu, Chris Develder
In this paper, we propose a new Markov decision process (MDP) formulation in the RL framework, to jointly coordinate a set of EV charging stations.
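A minimal illustrative MDP for jointly coordinating charging decisions is sketched below; the state variables, action space, and reward are placeholders for illustration, not the formulation proposed in the paper.

```python
# Toy MDP sketch for joint EV charging coordination (state, action, reward
# are illustrative placeholders only).
import random
from dataclasses import dataclass

@dataclass
class State:
    remaining_demand: list   # kWh still to deliver per connected EV
    time_to_departure: list  # time steps left before each EV leaves

def step(state, action, price):
    """Action: list of 0/1 charge decisions, one per EV, for this time step."""
    charged = [min(d, a * 1.0) for d, a in zip(state.remaining_demand, action)]
    next_state = State(
        remaining_demand=[d - c for d, c in zip(state.remaining_demand, charged)],
        time_to_departure=[t - 1 for t in state.time_to_departure],
    )
    # reward: negative electricity cost, plus a penalty for unmet demand at departure
    penalty = sum(d for d, t in zip(next_state.remaining_demand,
                                    next_state.time_to_departure) if t <= 0)
    reward = -price * sum(charged) - 10.0 * penalty
    return next_state, reward

state = State(remaining_demand=[4.0, 2.0, 6.0], time_to_departure=[3, 1, 5])
action = [1, 1, 0]                       # charge the first two EVs this step
next_state, reward = step(state, action, price=random.uniform(0.1, 0.3))
print(next_state, round(reward, 2))
```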
1 code implementation • CONLL 2018 • Thomas Demeester, Johannes Deleu, Fréderic Godin, Chris Develder
Inducing sparseness while training neural networks has been shown to yield models with a lower memory footprint but similar effectiveness to dense models.
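For context, one generic way to induce sparseness during training is an L1 penalty that drives weights toward zero, as in the sketch below; this is a common baseline technique, not the specific scheme studied in the paper.

```python
# Generic illustration of inducing sparseness during training via an L1
# penalty (a common technique; not this paper's specific scheme).
import torch
import torch.nn as nn

model = nn.Linear(100, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 100), torch.randn(32, 10)

for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss = loss + 1e-2 * model.weight.abs().sum()   # L1 penalty pushes weights to zero
    loss.backward()
    optimizer.step()

sparsity = (model.weight.abs() < 1e-3).float().mean().item()
print(f"fraction of near-zero weights: {sparsity:.2f}")
```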
1 code implementation • EMNLP 2018 • Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data.
Ranked #7 on Relation Extraction on ACE 2004
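A minimal sketch of AT as commonly applied to text, assuming the standard recipe of perturbing word embeddings in the direction of the loss gradient and adding a loss term on the perturbed input (the toy classifier and epsilon are illustrative assumptions):

```python
# Sketch of adversarial training as a regularizer: perturb word embeddings in
# the loss-gradient direction and also train on the perturbed input.
import torch
import torch.nn as nn

embed = nn.Embedding(1000, 50)
classifier = nn.Linear(50, 2)
params = list(embed.parameters()) + list(classifier.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

tokens = torch.randint(0, 1000, (16, 12))
labels = torch.randint(0, 2, (16,))

emb = embed(tokens)
logits = classifier(emb.mean(dim=1))            # toy bag-of-embeddings classifier
loss = nn.functional.cross_entropy(logits, labels)

# gradient of the loss w.r.t. the embeddings gives the worst-case direction
emb_grad = torch.autograd.grad(loss, emb, retain_graph=True)[0]
epsilon = 0.1
perturbation = epsilon * emb_grad / (emb_grad.norm() + 1e-8)

adv_logits = classifier((emb + perturbation).mean(dim=1))
adv_loss = nn.functional.cross_entropy(adv_logits, labels)

optimizer.zero_grad()
(loss + adv_loss).backward()                    # train on clean + adversarial examples
optimizer.step()
```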
no code implementations • 25 Jun 2018 • Lucas Sterckx, Johannes Deleu, Chris Develder, Thomas Demeester
We extend sequence-to-sequence models with the possibility to control the characteristics or style of the generated output, via attention that is generated a priori (before decoding) from a latent code vector.
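A minimal sketch of attention generated a priori: a latent code vector is mapped to a fixed attention distribution over the encoder states before decoding starts (the dimensions and the linear mapping are illustrative assumptions, not the paper's exact model).

```python
# Sketch of "a priori" attention: a latent code is mapped to a fixed attention
# distribution over the encoder states before any decoding step is taken.
import torch
import torch.nn as nn

class APrioriAttention(nn.Module):
    def __init__(self, code_dim=16, hidden=64, max_src_len=20):
        super().__init__()
        self.to_attention = nn.Linear(code_dim, max_src_len)

    def forward(self, encoder_states, code):
        # encoder_states: (batch, src_len, hidden); code: (batch, code_dim)
        scores = self.to_attention(code)[:, :encoder_states.size(1)]
        weights = torch.softmax(scores, dim=-1)             # fixed weights, set before decoding
        context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
        return context, weights                             # context conditions the decoder

module = APrioriAttention()
enc = torch.randn(4, 15, 64)     # encoder states for 4 source sequences
code = torch.randn(4, 16)        # latent style/characteristic code
context, weights = module(enc, code)
print(context.shape, weights.shape)   # torch.Size([4, 64]) torch.Size([4, 15])
```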
no code implementations • WS 2018 • Klim Zaporojets, Lucas Sterckx, Johannes Deleu, Thomas Demeester, Chris Develder
This paper describes the IDLab system submitted to Task A of the CLPsych 2018 shared task.
6 code implementations • 20 Apr 2018 • Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers.
Ranked #7 on Relation Extraction on CoNLL04
1 code implementation • 27 Sep 2017 • Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
In this work, we propose a new joint model that is able to tackle the two tasks simultaneously and construct the property tree by (i) avoiding the error propagation that would arise from solving the subtasks one after the other in a pipelined fashion, and (ii) exploiting the interactions between the subtasks.
1 code implementation • EACL 2017 • Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
In this paper, we address the (to the best of our knowledge) new problem of extracting a structured description of real estate properties from their natural language descriptions in classifieds.
no code implementations • 19 Nov 2015 • Lucas Sterckx, Thomas Demeester, Johannes Deleu, Chris Develder
We propose to combine distant supervision with minimal manual supervision in a technique called feature labeling, to eliminate noise from the large and noisy initial training set, resulting in a significant increase of precision.
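A toy sketch of the feature-labeling idea, assuming a small set of manually trusted features is used to filter the distantly supervised instances (the features and instances below are invented for illustration, not taken from the paper).

```python
# Toy sketch of feature labeling: keep a distantly supervised instance only if
# it contains at least one manually trusted feature (all examples illustrative).
trusted_features = {"was born in", "birthplace of"}          # minimal manual supervision

distant_instances = [
    {"sentence": "Obama was born in Hawaii.", "label": "place_of_birth"},
    {"sentence": "Obama visited Hawaii in 2010.", "label": "place_of_birth"},   # noisy match
]

def has_trusted_feature(instance):
    return any(feature in instance["sentence"] for feature in trusted_features)

filtered = [inst for inst in distant_instances if has_trusted_feature(inst)]
print(len(filtered), "of", len(distant_instances), "instances kept")   # 1 of 2
```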