no code implementations • 2 May 2022 • Hamed Zamani, Fernando Diaz, Mostafa Dehghani, Donald Metzler, Michael Bendersky
Although information access systems have long supported people in accomplishing a wide range of tasks, we propose broadening the scope of users of information access systems to include task-driven machines, such as machine learning models.
no code implementations • 25 Jan 2022 • Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, Marc Najork
In this work, we carefully select five datasets, including two in-domain datasets and three out-of-domain datasets with different levels of domain shift, and study the generalization of a deep model in a zero-shot setting.
no code implementations • 17 Dec 2021 • Nan Wang, Zhen Qin, Le Yan, Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Marc Najork
We further demonstrate that the dominant neural multiclass classification (MCC) architecture can be formulated as a special case of a neural ranking framework, given a specific set of design choices.
no code implementations • 30 Sep 2021 • Zhen Qin, Le Yan, Yi Tay, Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Marc Najork
We explore a novel perspective of knowledge distillation (KD) for learning to rank (LTR), and introduce Self-Distilled neural Rankers (SDR), where student rankers are parameterized identically to their teachers.
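As a rough illustration of the idea (not the paper's exact recipe), the sketch below trains a student ranker with the same architecture as its teacher on a blend of ground-truth labels and the teacher's scores; the listwise loss and the 0.5/0.5 weighting are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical univariate scoring model; teacher and student use the
# same architecture ("parameterized identically").
def make_ranker(num_features: int) -> nn.Module:
    return nn.Sequential(nn.Linear(num_features, 64), nn.ReLU(), nn.Linear(64, 1))

def listwise_softmax_ce(scores, targets):
    # Cross-entropy between softmax distributions over the document list.
    return -(torch.softmax(targets, -1) * torch.log_softmax(scores, -1)).sum(-1).mean()

num_features, list_size = 16, 10
teacher, student = make_ranker(num_features), make_ranker(num_features)

# Assume `teacher` was already trained on (features, labels).
features = torch.randn(32, list_size, num_features)    # toy batch of query lists
labels = torch.randint(0, 2, (32, list_size)).float()  # graded relevance

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
with torch.no_grad():
    soft_targets = teacher(features).squeeze(-1)       # teacher scores as soft labels

for _ in range(100):
    opt.zero_grad()
    scores = student(features).squeeze(-1)
    # Blend the ground-truth ranking loss with distillation from the teacher.
    loss = 0.5 * listwise_softmax_ce(scores, labels) \
         + 0.5 * listwise_softmax_ce(scores, soft_targets)
    loss.backward()
    opt.step()
```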
no code implementations • 29 Sep 2021 • Nan Wang, Zhen Qin, Le Yan, Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Marc Najork
We further demonstrate that the most popular MCC architecture in deep learning can be equivalently formulated as an LTR pipeline, with a specific set of choices in terms of ranking model architecture and loss function.
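One way to see this kind of equivalence: with one-hot relevance labels, the standard multiclass softmax cross-entropy coincides with a ListNet-style listwise softmax ranking loss over the classes. A minimal NumPy check of that identity (our illustration; the paper's construction may be more general):

```python
import numpy as np

def softmax_ce(logits, one_hot):
    # Standard multiclass cross-entropy: negative log-probability of the true class.
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[one_hot.argmax()])

def listwise_softmax_loss(scores, relevance):
    # ListNet-style softmax loss over "documents" (here: the classes).
    log_probs = scores - np.log(np.exp(scores).sum())
    return -(relevance * log_probs).sum()

logits = np.array([1.2, -0.3, 0.7, 2.1])
one_hot = np.array([0.0, 0.0, 0.0, 1.0])  # true class as the only "relevant document"

assert np.isclose(softmax_ce(logits, one_hot), listwise_softmax_loss(logits, one_hot))
```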
no code implementations • 11 Jun 2021 • Spurthi Amba Hombaiah, Tao Chen, Mingyang Zhang, Michael Bendersky, Marc Najork
To this end, we explore two different vocabulary composition methods and propose three sampling methods that enable efficient incremental training of BERT-like models.
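The snippet above leaves the methods unspecified; as one plausible ingredient of vocabulary composition, the hedged sketch below carries embeddings across a WordPiece vocabulary update, copying vectors for retained tokens and randomly initializing new ones (the tokens and initialization scheme are illustrative assumptions):

```python
import numpy as np

def compose_embeddings(old_vocab, old_emb, new_vocab, dim, rng):
    """Carry over embeddings for tokens kept from the old vocabulary;
    initialize newly added tokens randomly."""
    new_emb = rng.normal(scale=0.02, size=(len(new_vocab), dim))
    old_index = {tok: i for i, tok in enumerate(old_vocab)}
    for j, tok in enumerate(new_vocab):
        if tok in old_index:
            new_emb[j] = old_emb[old_index[tok]]
    return new_emb

rng = np.random.default_rng(0)
old_vocab = ["the", "covid", "##19", "lock"]
old_emb = rng.normal(size=(len(old_vocab), 8))
new_vocab = ["the", "covid", "lockdown", "##19"]  # updated WordPiece vocabulary
new_emb = compose_embeddings(old_vocab, old_emb, new_vocab, dim=8, rng=rng)
```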
no code implementations • 16 Apr 2021 • Te-Lin Wu, Cheng Li, Mingyang Zhang, Tao Chen, Spurthi Amba Hombaiah, Michael Bendersky
We consider document content blocks in multiple modalities (e.g., text, table, image) and propose a novel layout-aware multimodal hierarchical framework, LAMPreT, to model both the blocks and the whole document.
no code implementations • 15 Apr 2021 • Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, Marc Najork
We investigate the privacy and utility implications of applying dχ-privacy, a variant of Local Differential Privacy, to BERT fine-tuning in NLU applications.
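A common construction for dχ-privacy on embeddings (the usual multivariate mechanism; not necessarily the paper's exact variant) adds noise with density proportional to exp(-ε‖z‖), sampled as a uniform direction scaled by a Gamma-distributed magnitude:

```python
import numpy as np

def dx_privacy_noise(dim, epsilon, rng):
    """Sample noise with density proportional to exp(-epsilon * ||z||),
    as commonly used for dχ-privacy on embedding vectors."""
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return magnitude * direction

rng = np.random.default_rng(0)
token_embedding = rng.normal(size=128)  # stand-in for a BERT input embedding
epsilon = 10.0                          # privacy parameter (illustrative value)
private_embedding = token_embedding + dx_privacy_noise(128, epsilon, rng)
```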
2 code implementations • 2 Mar 2021 • Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, Marc Najork
First, at the time of writing, WIT is the largest multimodal dataset by number of image-text examples, with 3x more than the next largest dataset.
Ranked #1 on Text-Image Retrieval on WIT
no code implementations • ICLR 2021 • Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork
We first validate this concern by showing that most recent neural LTR models are, by a large margin, inferior to the best publicly available Gradient Boosted Decision Trees (GBDT) in terms of their reported ranking accuracy on benchmark datasets.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiecao Chen, Liu Yang, Karthik Raman, Michael Bendersky, Jung-Jung Yeh, Yun Zhou, Marc Najork, Danyang Cai, Ehsan Emadzadeh
Pre-trained models like BERT (Devlin et al., 2018) have dominated NLP / IR applications such as single sentence classification, text pair classification, and question answering.
no code implementations • 2 Oct 2020 • Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork
We propose a hybrid approach that leverages both semantic (deep neural network-based) and lexical (keyword matching-based) retrieval models.
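A minimal sketch of one standard way to hybridize the two signals, assuming min-max normalization and a convex combination with mixing weight alpha (the paper's fusion scheme may differ):

```python
import numpy as np

def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

def hybrid_scores(lexical, semantic, alpha=0.5):
    """Convex combination of normalized lexical (e.g., BM25) and semantic
    (e.g., embedding cosine) scores; alpha is a tunable mixing weight."""
    return alpha * minmax(np.asarray(lexical)) + (1 - alpha) * minmax(np.asarray(semantic))

bm25 = [12.3, 7.1, 0.4]      # keyword-matching scores (illustrative)
cosine = [0.82, 0.35, 0.91]  # dense-retrieval scores (illustrative)
ranking = np.argsort(-hybrid_scores(bm25, cosine, alpha=0.6))
```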
no code implementations • 1 Oct 2020 • Michael Bendersky, Honglei Zhuang, Ji Ma, Shuguang Han, Keith Hall, Ryan McDonald
In this paper, we report the results of our participation in the TREC-COVID challenge.
no code implementations • 6 May 2020 • Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Alexander Grushetsky, Yonghui Wu, Petr Mitrichev, Ethan Sterling, Nathan Bell, Walker Ravina, Hai Qian
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area.
1 code implementation • 26 Apr 2020 • Liu Yang, Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork
In order to better capture sentence level semantic relations within a document, we pre-train the model with a novel masked sentence block language modeling task in addition to the masked word language modeling task used by BERT.
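A hedged sketch of the data side of such a task: pick one sentence block, mask all of its tokens, and keep the originals as reconstruction targets (the token ids and span bookkeeping are illustrative, and BERT's usual random word masking is omitted):

```python
import numpy as np

MASK_ID = 103  # BERT's [MASK] token id

def mask_sentence_block(token_ids, block_spans, rng):
    """Mask every token of one randomly chosen sentence block, in addition
    to (not shown here) BERT's usual random word masking."""
    ids = np.array(token_ids)
    start, end = block_spans[rng.integers(len(block_spans))]
    targets = ids[start:end].copy()  # what the model must reconstruct
    ids[start:end] = MASK_ID
    return ids, (start, end), targets

rng = np.random.default_rng(0)
token_ids = [101, 2023, 2003, 1037, 6251, 1012, 2178, 6251, 2182, 1012, 102]
block_spans = [(1, 6), (6, 10)]  # token spans of the two sentence blocks
masked, span, targets = mask_sentence_block(token_ids, block_spans, rng)
```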
no code implementations • 21 Oct 2019 • Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork
This motivates us to study how to leverage cross-document interactions for learning-to-rank in the deep learning framework.
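A minimal sketch of cross-document interaction, assuming self-attention across the candidates of a list before per-document scoring (an assumed architecture, not necessarily the paper's):

```python
import torch
import torch.nn as nn

class CrossDocScorer(nn.Module):
    """Scores each document after letting documents in the same list
    attend to one another (a simple form of cross-document interaction)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, docs):                  # docs: [batch, list_size, dim]
        mixed, _ = self.attn(docs, docs, docs)
        return self.score(mixed).squeeze(-1)  # [batch, list_size]

scorer = CrossDocScorer(dim=32)
scores = scorer(torch.randn(8, 10, 32))  # 8 queries, 10 candidates each
```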
no code implementations • 19 Jun 2019 • Brandon Tran, Maryam Karimzadehgan, Rama Kumar Pasumarthi, Michael Bendersky, Donald Metzler
To address this data challenge, in this paper we propose a domain adaptation approach that fine-tunes the global model to each individual enterprise.
1 code implementation • 30 Nov 2018 • Rama Kumar Pasumarthi, Sebastian Bruch, Xuanhui Wang, Cheng Li, Michael Bendersky, Marc Najork, Jan Pfeifer, Nadav Golbandi, Rohan Anil, Stephan Wolf
We propose TensorFlow Ranking, the first open source library for solving large-scale ranking problems in a deep learning framework.
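A minimal usage sketch, assuming a recent tensorflow-ranking release with the Keras API (tfr.keras.losses.SoftmaxLoss and tfr.keras.metrics.NDCGMetric); the toy scoring network is our own:

```python
import tensorflow as tf
import tensorflow_ranking as tfr  # pip install tensorflow-ranking

list_size, num_features = 10, 16
inputs = tf.keras.Input(shape=(list_size, num_features))
hidden = tf.keras.layers.Dense(32, activation="relu")(inputs)
logits = tf.squeeze(tf.keras.layers.Dense(1)(hidden), axis=-1)  # one score per document
model = tf.keras.Model(inputs, logits)

model.compile(
    optimizer="adam",
    loss=tfr.keras.losses.SoftmaxLoss(),             # listwise ranking loss
    metrics=[tfr.keras.metrics.NDCGMetric(topn=5)],  # ranking quality metric
)
```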
2 code implementations • 11 Nov 2018 • Qingyao Ai, Xuanhui Wang, Sebastian Bruch, Nadav Golbandi, Michael Bendersky, Marc Najork
To overcome this limitation, we propose a new framework for multivariate scoring functions, in which the relevance score of a document is determined jointly by multiple documents in the list.
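In the spirit of such multivariate (groupwise) scoring, the sketch below jointly scores every pair of documents and averages each document's contributions; the group size, aggregation, and architecture are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PairwiseGroupScorer(nn.Module):
    """Multivariate scoring with group size 2: each document's final score
    accumulates contributions from being jointly scored with every other
    document in the list."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, docs):                          # docs: [batch, n, dim]
        b, n, d = docs.shape
        left = docs.unsqueeze(2).expand(b, n, n, d)   # doc i in pair (i, j)
        right = docs.unsqueeze(1).expand(b, n, n, d)  # doc j in pair (i, j)
        pair_scores = self.mlp(torch.cat([left, right], dim=-1))  # [b, n, n, 2]
        # Average the score each document receives across all groups it appears in.
        return (pair_scores[..., 0].mean(dim=2) + pair_scores[..., 1].mean(dim=1)) / 2

scores = PairwiseGroupScorer(dim=16)(torch.randn(4, 8, 16))  # [4, 8]
```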
no code implementations • 15 Sep 2018 • Jiaming Shen, Maryam Karimzadehgan, Michael Bendersky, Zhen Qin, Donald Metzler
In this paper, we study how to obtain query type in an unsupervised fashion and how to incorporate this information into query-dependent ranking models.
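One concrete, hedged instantiation of the unsupervised side: derive a "query type" by clustering query embeddings and feed the cluster id to the ranker as a feature (the k-means step and one-hot encoding are our assumptions, not the paper's method):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
query_embeddings = rng.normal(size=(1000, 64))  # stand-in query representations

# Unsupervised "query type": cluster id from k-means over query embeddings.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(query_embeddings)
query_types = kmeans.labels_

# One way to use it in a query-dependent ranker: append a one-hot type feature.
type_features = np.eye(5)[query_types]  # [num_queries, 5]
```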
no code implementations • 7 Sep 2016 • Harrie Oosterhuis, Sujith Ravi, Michael Bendersky
Our approach effectively captures the multimodal semantics of queries and videos using state-of-the-art deep neural networks and creates a summary that is both semantically coherent and visually attractive.