no code implementations • dialdoc (ACL) 2022 • Yiwei Jiang, Amir Hadifar, Johannes Deleu, Thomas Demeester, Chris Develder
Further, error analysis reveals two major failure cases, to be addressed in future work: (i) in case of topic shift within the dialog, retrieval often fails to select the correct grounding document(s), and (ii) generation sometimes fails to use the correctly retrieved grounding passage.
1 code implementation • COLING (WNUT) 2022 • Sofie Labat, Amir Hadifar, Thomas Demeester, Veronique Hoste
The ability to track fine-grained emotions in customer service dialogues has many real-world applications, but has not been studied extensively.
1 code implementation • 25 Oct 2022 • Semere Kiros Bitew, Amir Hadifar, Lucas Sterckx, Johannes Deleu, Chris Develder, Thomas Demeester
This paper studies how a large existing set of manually created answers and distractors for questions over a variety of domains, subjects, and languages can be leveraged to help teachers in creating new MCQs, by the smart reuse of existing distractors.
no code implementations • 12 Oct 2022 • Amir Hadifar, Semere Kiros Bitew, Johannes Deleu, Chris Develder, Thomas Demeester
Thus, our versatile dataset can be used for both question and distractor generation, as well as to explore new challenges such as question format conversion.
1 code implementation • NAACL 2021 • Amir Hadifar, Sofie Labat, Véronique Hoste, Chris Develder, Thomas Demeester
In online domain-specific customer service applications, many companies struggle to deploy advanced NLP models successfully, due to the limited availability of, and noise in, their datasets.
1 code implementation • 14 Jan 2020 • Amir Hadifar, Johannes Deleu, Chris Develder, Thomas Demeester
In this paper, we present a new method for \emph{dynamic sparseness}, whereby part of the computations are omitted dynamically, based on the input.
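The idea of dynamic sparseness can be illustrated with a toy linear layer: a cheap gating step picks, per input, which output units are worth computing, and the remaining dot products are skipped. This is a minimal sketch of the general concept, not the paper's actual method; the gating heuristic and `keep_ratio` parameter are assumptions for illustration.

```python
import numpy as np

def dynamic_sparse_linear(x, W, b, G, keep_ratio=0.25):
    """Toy dynamic sparseness: a small random projection G acts as a cheap
    gate that scores output units; only the top-scoring units' dot products
    are computed, the rest are dynamically omitted (left at zero).
    Illustrative sketch only -- not the paper's exact mechanism."""
    scores = np.abs(G @ x)                      # cheap per-unit relevance proxy
    k = max(1, int(keep_ratio * W.shape[0]))    # number of units to keep
    active = np.argsort(scores)[-k:]            # indices of units to compute
    y = np.zeros(W.shape[0])
    y[active] = W[active] @ x + b[active]       # only k dot products performed
    return y, active

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))                 # full weight matrix
b = rng.standard_normal(8)
G = rng.standard_normal((8, 4))                 # cheap gating projection
x = rng.standard_normal(4)
y, active = dynamic_sparse_linear(x, W, b, G, keep_ratio=0.25)
```

With `keep_ratio=0.25` and 8 output units, only 2 rows of `W` are ever multiplied for this input; which 2 depends on the input itself, which is what distinguishes dynamic from static sparsity.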
1 code implementation • WS 2019 • Amir Hadifar, Lucas Sterckx, Thomas Demeester, Chris Develder
Short text clustering is a challenging problem when adopting traditional bag-of-words or TF-IDF representations, since these lead to sparse vector representations of the short texts.
Ranked #2 on Short Text Clustering on SearchSnippets