Search Results for author: Rama Kumar Pasumarthi

Found 6 papers, 2 papers with code

RankDistil: Knowledge Distillation for Ranking

no code implementations AISTATS 2021 Sashank J. Reddi, Rama Kumar Pasumarthi, Aditya Krishna Menon, Ankit Singh Rawat, Felix Yu, Seungyeon Kim, Andreas Veit, Sanjiv Kumar

Knowledge distillation is an approach to improving the performance of a student model by using the knowledge of a complex teacher model. Despite its success in several deep learning applications, the study of distillation has mostly been confined to classification settings.

Document Ranking, Knowledge Distillation
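As a point of reference for the setup the abstract describes, here is a minimal sketch of standard classification-style distillation (softened teacher probabilities blended with a hard-label loss). The temperature and mixing weight are illustrative placeholders, and RankDistil itself proposes ranking-specific objectives rather than this exact loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Classic classification distillation: blend hard-label cross-entropy
    with KL divergence between temperature-softened teacher and student
    distributions. (Illustrative only; RankDistil studies ranking-specific
    objectives, not this loss.)"""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-term gradients stay comparable to the hard term
    return alpha * hard + (1.0 - alpha) * soft

# toy usage: batch of 4 examples, 10 classes
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```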

Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?

no code implementations ICLR 2021 Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork

We first validate this concern by showing that most recent neural LTR models are, by a large margin, inferior to the best publicly available Gradient Boosted Decision Trees (GBDT) in terms of their reported ranking accuracy on benchmark datasets.

Learning-To-Rank

Self-Attentive Document Interaction Networks for Permutation Equivariant Ranking

no code implementations 21 Oct 2019 Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork

This motivates us to study how to leverage cross-document interactions for learning-to-rank in a deep learning framework.

Information Retrieval, Learning-To-Rank +1
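A minimal sketch of the cross-document interaction idea: self-attention applied across the candidate documents of a single query, so each document's score can depend on the other documents in the list (and, without positional encodings, the scoring stays permutation equivariant). The layer sizes and scoring head are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossDocScorer(nn.Module):
    """Sketch of cross-document interaction for ranking: self-attention over
    the list of candidate documents, followed by a per-document scoring head.
    Sizes are illustrative."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)

    def forward(self, doc_feats):
        # doc_feats: (batch, list_size, d_model) query-document features
        ctx, _ = self.attn(doc_feats, doc_feats, doc_feats)
        return self.score(ctx).squeeze(-1)  # (batch, list_size) scores

# toy usage: 2 queries, 5 candidate documents each, 64-dim features
scores = CrossDocScorer()(torch.randn(2, 5, 64))
print(scores.shape)  # torch.Size([2, 5])
```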

Domain Adaptation for Enterprise Email Search

no code implementations 19 Jun 2019 Brandon Tran, Maryam Karimzadehgan, Rama Kumar Pasumarthi, Michael Bendersky, Donald Metzler

To address this data challenge, in this paper we propose a domain adaptation approach that fine-tunes the global model to each individual enterprise.

Domain Adaptation, Information Retrieval +1
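A generic sketch of the fine-tuning pattern the abstract describes: clone a globally trained model and continue training it on one enterprise's data with a small learning rate. The architecture, the pointwise regression loss standing in for a ranking loss, and the hyperparameters are placeholders, not the paper's recipe.

```python
import copy
import torch
import torch.nn as nn

# Stand-in global ranking model (architecture is illustrative),
# assumed to have been trained on global, cross-enterprise data.
global_model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def adapt_to_enterprise(global_model, enterprise_batches, lr=1e-4, epochs=3):
    """Generic fine-tuning sketch: copy the global model and keep training
    on a single enterprise's (features, relevance) batches."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for feats, relevance in enterprise_batches:
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(feats).squeeze(-1), relevance)
            loss.backward()
            opt.step()
    return model

# toy usage with random enterprise-specific data
batches = [(torch.randn(32, 128), torch.rand(32)) for _ in range(10)]
enterprise_model = adapt_to_enterprise(global_model, batches)
```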

Gated-Attention Architectures for Task-Oriented Language Grounding

1 code implementation 22 Jun 2017 Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, Ruslan Salakhutdinov

To perform tasks specified by natural language instructions, autonomous agents need to extract semantically meaningful representations of language and map them to visual elements and actions in the environment.

Imitation Learning
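A minimal sketch of the gated-attention fusion idea: a sigmoid gate computed from the instruction embedding scales the convolutional image feature maps channel-wise via a Hadamard product. Dimensions and layer choices are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Sketch of gated-attention fusion: a sigmoid gate derived from the
    instruction embedding multiplies the image feature maps channel-wise.
    Dimensions are illustrative."""
    def __init__(self, text_dim=256, n_channels=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(text_dim, n_channels), nn.Sigmoid())

    def forward(self, image_feats, text_emb):
        # image_feats: (batch, channels, H, W); text_emb: (batch, text_dim)
        g = self.gate(text_emb)                   # (batch, channels) gate values
        return image_feats * g[:, :, None, None]  # broadcast over H and W

# toy usage: 7x7 conv feature maps gated by a 256-dim instruction embedding
fused = GatedAttention()(torch.randn(2, 64, 7, 7), torch.randn(2, 256))
print(fused.shape)  # torch.Size([2, 64, 7, 7])
```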
