Search Results for author: Jeska Buhmann

Found 6 papers, 3 papers with code

PersonalityChat: Conversation Distillation for Personalized Dialog Modeling with Facts and Traits

no code implementations • 14 Jan 2024 • Ehsan Lotfi, Maxime De Bruyn, Jeska Buhmann, Walter Daelemans

The new wave of Large Language Models (LLMs) has offered an efficient tool to curate sizeable conversational datasets.

Teach Me What to Say and I Will Learn What to Pick: Unsupervised Knowledge Selection Through Response Generation with Pretrained Generative Models

no code implementations • EMNLP (NLP4ConvAI) 2021 • Ehsan Lotfi, Maxime De Bruyn, Jeska Buhmann, Walter Daelemans

In this work we study the unsupervised selection abilities of pre-trained generative models (e.g. BART) and show that by adding a score-and-aggregate module between encoder and decoder, they are capable of learning to pick the proper knowledge through minimising the language modelling loss (i.e. without having access to knowledge labels).

Language Modelling • Response Generation • +1

ConveRT for FAQ Answering

1 code implementation • 2 Aug 2021 • Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, Walter Daelemans

While powerful and efficient retrieval-based models exist for English, it is rarely the case for other languages for which the same amount of training data is not available.

Chatbot • Retrieval
