no code implementations • EMNLP 2021 • Ivan Vulić, Pei-Hao Su, Sam Coope, Daniela Gerz, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, Tsung-Hsien Wen
Transformer-based language models (LMs) pretrained on large text collections have been shown to store a wealth of semantic knowledge.
no code implementations • EMNLP 2021 • Daniela Gerz, Pei-Hao Su, Razvan Kusztos, Avishek Mondal, Michał Lis, Eshan Singhal, Nikola Mrkšić, Tsung-Hsien Wen, Ivan Vulić
We present a systematic study on multilingual and cross-lingual intent detection from spoken data.
1 code implementation • ACL 2020 • Sam Coope, Tyler Farghly, Daniela Gerz, Ivan Vulić, Matthew Henderson
We introduce Span-ConveRT, a lightweight model for dialog slot-filling that frames the task as turn-based span extraction.
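A minimal sketch of the span-extraction framing described above, assuming per-token start/end scores produced by some encoder. This is not the Span-ConveRT architecture itself; the function name, threshold, and toy scores are purely illustrative.

```python
# Sketch: slot filling as span extraction over one user turn. Given per-token
# start/end scores for a slot, return the highest-scoring valid span, or None
# if the turn carries no value for the slot.
from typing import List, Optional, Tuple

def extract_slot_span(tokens: List[str],
                      start_scores: List[float],
                      end_scores: List[float],
                      max_span_len: int = 6) -> Optional[Tuple[int, int, str]]:
    best, best_score = None, 0.0  # only return spans scoring above zero
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_span_len, len(tokens))):
            score = s + end_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    if best is None:
        return None
    i, j = best
    return i, j, " ".join(tokens[i:j + 1])

# Toy usage: in practice the scores come from per-token span heads on top of
# a sentence encoder; here they are hand-set for illustration.
turn = "book a table for four at eight pm".split()
start = [0, 0, 0, 0, 0, 0, 2.0, 0]
end   = [0, 0, 0, 0, 0, 0, 0.5, 2.5]
print(extract_slot_span(turn, start, end))  # -> (6, 7, 'eight pm')
```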
1 code implementation • ACL 2020 • Daniela Gerz, Ivan Vulić, Marek Rei, Roi Reichart, Anna Korhonen
We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures.
5 code implementations • WS 2020 • Iñigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, Ivan Vulić
Building conversational systems in new domains and with added functionality requires resource-efficient models that work under low-data regimes (i.e., in few-shot setups).
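A minimal sketch of few-shot intent detection under such a low-data regime, using a nearest-centroid classifier over a toy bag-of-words embedding. This is not the paper's dual sentence-encoder setup; the intents, utterances, and `embed` function below are illustrative stand-ins for a pretrained encoder and real training data.

```python
# Sketch: average the few labelled utterances per intent into a centroid,
# then classify a new turn by dot-product similarity to the centroids.
import numpy as np

TRAIN = {
    "book_table":    ["reserve a table for two", "book a table tonight"],
    "check_balance": ["what is my account balance", "show my balance"],
}

VOCAB = sorted({w for utts in TRAIN.values() for u in utts for w in u.split()})

def embed(text: str) -> np.ndarray:
    # Toy bag-of-words embedding; a real system would use a pretrained encoder.
    v = np.zeros(len(VOCAB))
    for w in text.lower().split():
        if w in VOCAB:
            v[VOCAB.index(w)] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

CENTROIDS = {intent: np.mean([embed(u) for u in utts], axis=0)
             for intent, utts in TRAIN.items()}

def predict_intent(utterance: str) -> str:
    v = embed(utterance)
    return max(CENTROIDS, key=lambda k: float(v @ CENTROIDS[k]))

print(predict_intent("can you book a table for four"))  # -> book_table
```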
no code implementations • IJCNLP 2019 • Matthew Henderson, Ivan Vulić, Iñigo Casanueva, Paweł Budzianowski, Daniela Gerz, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su
We present PolyResponse, a conversational search engine that supports task-oriented dialogue.
1 code implementation • ACL 2019 • Matthew Henderson, Ivan Vulić, Daniela Gerz, Iñigo Casanueva, Paweł Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su
Despite their popularity in the chatbot literature, retrieval-based models have had modest impact on task-oriented dialogue systems, with the main obstacle to their application being the low-data regime of most task-oriented dialogue tasks.
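A minimal sketch of what retrieval-based response selection means in this setting: the dialogue context and each candidate response are embedded into a shared space and candidates are ranked by similarity. The bag-of-words embedding below is a stand-in for a pretrained dual encoder and is not the paper's model.

```python
# Sketch: rank candidate responses by cosine similarity to the context.
import numpy as np
from typing import List, Tuple

def rank_responses(context: str, candidates: List[str]) -> List[Tuple[float, str]]:
    texts = [context] + candidates
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}

    def embed(text: str) -> np.ndarray:
        # Toy bag-of-words vector, L2-normalised; a real system would share a
        # pretrained encoder between contexts and responses.
        v = np.zeros(len(vocab))
        for w in text.lower().split():
            v[index[w]] += 1.0
        return v / (np.linalg.norm(v) + 1e-9)

    ctx = embed(context)
    scored = [(float(ctx @ embed(c)), c) for c in candidates]
    return sorted(scored, reverse=True)

print(rank_responses(
    "i would like to book a table for two tonight",
    ["sure, for what time tonight?",
     "the weather is sunny today",
     "your parcel has been shipped"],
))
```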
no code implementations • NAACL 2019 • Ehsan Shareghi, Daniela Gerz, Ivan Vulić, Anna Korhonen
In recent years, neural language models (LMs) have set state-of-the-art performance on several benchmark datasets.
3 code implementations • WS 2019 • Matthew Henderson, Paweł Budzianowski, Iñigo Casanueva, Sam Coope, Daniela Gerz, Girish Kumar, Nikola Mrkšić, Georgios Spithourakis, Pei-Hao Su, Ivan Vulić, Tsung-Hsien Wen
Progress in Machine Learning is often driven by the availability of large datasets and consistent evaluation metrics for comparing modeling approaches.
no code implementations • EMNLP 2018 • Daniela Gerz, Ivan Vulić, Edoardo Maria Ponti, Roi Reichart, Anna Korhonen
A key challenge in cross-lingual NLP is developing general language-independent architectures that are equally applicable to any language.
1 code implementation • ACL 2018 • Marek Rei, Daniela Gerz, Ivan Vulić
Experiments show excellent performance on scoring graded lexical entailment, raising the state-of-the-art on the HyperLex dataset by approximately 25%.
no code implementations • TACL 2018 • Daniela Gerz, Ivan Vulić, Edoardo Ponti, Jason Naradowsky, Roi Reichart, Anna Korhonen
Neural architectures are prominent in the construction of language models (LMs).
no code implementations • CL 2017 • Ivan Vulić, Daniela Gerz, Douwe Kiela, Felix Hill, Anna Korhonen
We introduce HyperLex, a dataset and evaluation resource that quantifies the extent of semantic category membership, that is, the type-of relation (also known as hyponymy-hypernymy or lexical entailment, LE) between 2,616 concept pairs.
1 code implementation • EMNLP 2016 • Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, Anna Korhonen
Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research.