no code implementations • EMNLP (IntEx-SemPar) 2020 • Priyanka Sen, Emine Yilmaz
Collecting training data for semantic parsing is a time-consuming and expensive task.
1 code implementation • ICML 2020 • Qiang Zhang, Aldo Lipani, Omer Kirnap, Emine Yilmaz
A common method to do this is the Hawkes process.
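The conditional intensity function at the heart of a Hawkes process can be sketched in a few lines. This is a generic exponential-kernel version for illustration only; the parameter values `mu`, `alpha`, `beta` are arbitrary defaults, not taken from the paper:

```python
import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum over past events t_i < t
    of alpha * exp(-beta * (t - t_i)). Each past event temporarily
    raises the intensity (self-excitation), then decays away."""
    return mu + sum(alpha * math.exp(-beta * (t - t_i))
                    for t_i in history if t_i < t)

# Intensity just after a burst of events is elevated above the baseline mu.
events = [1.0, 1.5, 3.0]
print(hawkes_intensity(3.1, events))
```

The later entry by the same authors replaces this fixed-form kernel with one parameterised by self-attention; the sketch above only shows the classical intensity being fit.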
1 code implementation • Findings (EMNLP) 2021 • Jarana Manotumruksa, Jeff Dalton, Edgar Meij, Emine Yilmaz
While state-of-the-art Dialogue State Tracking (DST) models show promising results, all of them rely on a traditional cross-entropy loss function during the training process, which may not be optimal for improving the joint goal accuracy.
no code implementations • 19 Jun 2022 • Peter Hayes, Mingtian Zhang, Raza Habib, Jordan Burgess, Emine Yilmaz, David Barber
We introduce a label model that can learn to aggregate weak supervision sources differently for different datapoints and takes into consideration the performance of the end-model during training.
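As a toy illustration of aggregating weak supervision sources with datapoint-dependent weights (a simple weighted vote, not the paper's actual label model; the weights here are hypothetical inputs that a learned label model would produce per datapoint):

```python
def aggregate(weak_labels, source_weights):
    """Combine votes from multiple weak supervision sources into one label.

    weak_labels: one label per source (None = the source abstains).
    source_weights: per-datapoint reliability weight for each source.
    """
    scores = {}
    for label, w in zip(weak_labels, source_weights):
        if label is not None:
            scores[label] = scores.get(label, 0.0) + w
    # Highest weighted vote wins; None if every source abstained.
    return max(scores, key=scores.get) if scores else None

# Three sources vote on one datapoint; the second source is trusted most here.
print(aggregate(["spam", "ham", "spam"], [0.4, 1.0, 0.3]))  # → "ham"
```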
1 code implementation • 17 May 2022 • Rikaz Rameez, Hossein A. Rahmani, Emine Yilmaz
We collect a dataset of 330k tweets to train ViralBERT and validate the efficacy of our model using baselines from current studies in this field.
no code implementations • ACL 2022 • Yue Feng, Aldo Lipani, Fanghua Ye, Qiang Zhang, Emine Yilmaz
Existing approaches that have considered such relations generally fall short in: (1) fusing prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly, and (2) generalizing to unseen domains.
1 code implementation • Findings (ACL) 2022 • Fanghua Ye, Yue Feng, Emine Yilmaz
In this paper, instead of improving the annotation quality further, we propose a general framework, named ASSIST (lAbel noiSe-robuSt dIalogue State Tracking), to train DST models robustly from noisy labels.
no code implementations • 10 Jan 2022 • Maria Perez-Ortiz, Sahan Bulathwela, Claire Dormann, Meghana Verma, Stefan Kreitmayer, Richard Noss, John Shawe-Taylor, Yvonne Rogers, Emine Yilmaz
The user questionnaire revealed that participants found the Content Flow Bar helpful and enjoyable for finding relevant information in videos.
no code implementations • 8 Dec 2021 • Sahan Bulathwela, María Pérez-Ortiz, Emine Yilmaz, John Shawe-Taylor
In informational recommenders, many challenges arise from the need to handle the semantic and hierarchical structure between knowledge areas.
no code implementations • NeurIPS 2021 • Qiang Zhang, Jinyuan Fang, Zaiqiao Meng, Shangsong Liang, Emine Yilmaz
Conventional meta-learning considers a set of tasks from a stationary distribution.
no code implementations • 17 Oct 2021 • Aldo Lipani, Florina Piroi, Emine Yilmaz
Information availability affects people's behavior and perception of the world.
1 code implementation • ICLR 2022 • Fangyu Liu, Yunlong Jiao, Jordan Massiah, Emine Yilmaz, Serhii Havrylov
Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders.
Ranked #1 on Semantic Textual Similarity on STS15
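The entry above contrasts the two standard formulations for sentence-pair tasks. A minimal sketch of the structural difference, using a toy bag-of-words encoder in place of a real sentence encoder (all functions here are hypothetical stand-ins):

```python
import math

def embed(text):
    """Toy bag-of-words encoder standing in for a sentence encoder."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def bi_encoder_score(a, b):
    # Bi-encoder: each sentence is encoded independently, so embeddings
    # can be precomputed and cached; only the comparison happens per pair.
    return cosine(embed(a), embed(b))

def cross_encoder_score(a, b):
    # Cross-encoder stand-in: the pair is scored jointly (here via token
    # overlap), so the score cannot be factored into two separate embeddings.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
```

The trade-off this illustrates: bi-encoders scale to large candidate sets because embeddings are reusable, while cross-encoders see both inputs at once and are typically more accurate per pair.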
no code implementations • 24 Sep 2021 • Emine Yilmaz, Peter Hayes, Raza Habib, Jordan Burgess, David Barber
Labelling data is a major practical bottleneck in training and testing classifiers.
1 code implementation • 3 Sep 2021 • Sahan Bulathwela, Maria Perez-Ortiz, Erik Novak, Emine Yilmaz, John Shawe-Taylor
One of the main challenges in advancing this research direction is the scarcity of large, publicly available datasets.
no code implementations • 11 Aug 2021 • Ömer Kırnap, Fernando Diaz, Asia Biega, Michael Ekstrand, Ben Carterette, Emine Yilmaz
There is increasing attention to evaluating the fairness of search system ranking decisions.
no code implementations • 9 May 2021 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Jimmy Lin
Evaluation efforts such as TREC, CLEF, NTCIR and FIRE, alongside public leaderboards such as MS MARCO, are intended to encourage research and track our progress, addressing big questions in our field.
no code implementations • 19 Apr 2021 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Ellen M. Voorhees, Ian Soboroff
The TREC Deep Learning (DL) Track studies ad hoc search in the large data regime, meaning that a large set of human-labeled training data is available.
1 code implementation • 1 Apr 2021 • Fanghua Ye, Jarana Manotumruksa, Emine Yilmaz
This work introduces MultiWOZ 2.4, in which we refine all annotations in the validation set and test set on top of MultiWOZ 2.1.
no code implementations • 25 Feb 2021 • Jimmy Lin, Daniel Campos, Nick Craswell, Bhaskar Mitra, Emine Yilmaz
Leaderboards are a ubiquitous part of modern research in applied machine learning.
1 code implementation • 15 Feb 2021 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos
This is the second year of the TREC Deep Learning Track, with the goal of studying ad hoc ranking in the large training data regime.
1 code implementation • 22 Jan 2021 • Fanghua Ye, Jarana Manotumruksa, Qiang Zhang, Shenghui Li, Emine Yilmaz
A stacked slot self-attention is then applied to these features to learn the correlations among slots.
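A bare-bones sketch of one self-attention step over slot feature vectors, to show how each slot's representation becomes a mixture of all slots'. This omits the learned query/key/value projections and stacking that a real model would use; the example vectors are hypothetical:

```python
import math

def self_attention(X):
    """Scaled dot-product self-attention over a list of feature vectors.
    Each output vector is a softmax-weighted mixture of all input vectors,
    letting slots exchange information with one another."""
    d = len(X[0])
    out = []
    for q in X:
        # Similarity of this slot's vector to every slot vector, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted mixture of all slot vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, X)) for j in range(d)])
    return out

# Three hypothetical 2-d slot feature vectors.
slots = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(self_attention(slots))
```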
1 code implementation • 2 Nov 2020 • Sahan Bulathwela, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
This paper introduces VLEngagement, a novel dataset that consists of content-based and video-specific features extracted from publicly available scientific video lectures and several metrics related to user engagement.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Fanghua Ye, Jarana Manotumruksa, Emine Yilmaz
Semantic hashing is a powerful paradigm for representing texts as compact binary hash codes.
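To make the representation concrete: random-hyperplane hashing is a classical, non-learned way to turn vectors into binary codes compared by Hamming distance. The paper learns its hash codes instead, so this is only a sketch of the code-and-compare paradigm, with arbitrary dimensions:

```python
import random

def hash_code(vec, planes):
    """One bit per random hyperplane: the sign of the projection decides
    the bit, so similar vectors land on the same side of most planes."""
    return [1 if sum(v * p for v, p in zip(vec, plane)) >= 0 else 0
            for plane in planes]

def hamming(a, b):
    """Number of differing bits between two equal-length codes."""
    return sum(x != y for x, y in zip(a, b))

random.seed(0)
dim, n_bits = 8, 16
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
doc = [random.gauss(0, 1) for _ in range(dim)]
near = [x + random.gauss(0, 0.01) for x in doc]   # near-duplicate vector
far = [random.gauss(0, 1) for _ in range(dim)]    # unrelated vector
print(hamming(hash_code(doc, planes), hash_code(near, planes)))  # tends to be small
print(hamming(hash_code(doc, planes), hash_code(far, planes)))
```

Compact codes like these let similarity search run on bit operations rather than floating-point distance computations.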
no code implementations • 9 Jun 2020 • Nick Craswell, Daniel Campos, Bhaskar Mitra, Emine Yilmaz, Bodo Billerbeck
Users of Web search engines reveal their information needs through queries and clicks, making click logs a useful asset for information retrieval.
1 code implementation • 31 May 2020 • Sahan Bulathwela, María Pérez-Ortiz, Aldo Lipani, Emine Yilmaz, John Shawe-Taylor
The explosion of Open Educational Resources (OERs) in recent years creates a demand for scalable, automatic approaches to process and evaluate OERs, with the end goal of identifying and recommending the most suitable educational materials for learners.
no code implementations • 28 Apr 2020 • Emine Yilmaz, Nick Craswell, Bhaskar Mitra, Daniel Campos
As deep learning based models are increasingly being used for information retrieval (IR), a major challenge is to ensure the availability of test collections for measuring their quality.
2 code implementations • 17 Mar 2020 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Ellen M. Voorhees
The Deep Learning Track is a new track for TREC 2019, with the goal of studying ad hoc ranking in a large data regime.
1 code implementation • 3 Dec 2019 • Sahan Bulathwela, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
One of the most ambitious use cases of computer-assisted learning is to build a recommendation system for lifelong learning.
1 code implementation • 21 Nov 2019 • Sahan Bulathwela, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
The recent advances in computer-assisted learning systems and the availability of open educational resources today promise a pathway to providing cost-efficient, high-quality education to large masses of learners.
1 code implementation • 12 Oct 2019 • Niklas Stoehr, Emine Yilmaz, Marc Brockschmidt, Jan Stuehmer
While a wide range of interpretable generative procedures for graphs exist, matching observed graph topologies to such procedures and choosing their parameters remains an open problem.
1 code implementation • 17 Jul 2019 • Qiang Zhang, Aldo Lipani, Omer Kirnap, Emine Yilmaz
The proposed method adapts self-attention to fit the intensity function of Hawkes processes.
no code implementations • 8 Jul 2019 • Bhaskar Mitra, Corby Rosset, David Hawking, Nick Craswell, Fernando Diaz, Emine Yilmaz
Deep neural IR models, in contrast, compare the whole query to the document and are, therefore, typically employed only for late stage re-ranking.
no code implementations • 30 Dec 2018 • Qiang Zhang, Shangsong Liang, Emine Yilmaz
This paper proposes a variational self-attention model (VSAM) that employs variational inference to derive self-attention.
no code implementations • 28 Nov 2018 • Sebastin Santy, Wazeer Zulfikar, Rishabh Mehrotra, Emine Yilmaz
We consider the problem of understanding real world tasks depicted in visual images.
no code implementations • 6 Jun 2017 • Rishabh Mehrotra, Emine Yilmaz
As a result, a significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as to provide the end user with better query suggestions, recommendations, satisfaction prediction, and improved task-based personalization.