Search Results for author: Akash Kumar Mohankumar

Found 7 papers, 4 papers with code

Unified Generative & Dense Retrieval for Query Rewriting in Sponsored Search

no code implementations • 13 Sep 2022 • Akash Kumar Mohankumar, Bhargav Dodla, Gururaj K, Amit Singh

To leverage the strengths of both methods, we propose CLOVER-Unity, a novel approach that unifies generative and dense retrieval methods in a single model.

Retrieval • Text Generation • +2
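
The CLOVER-Unity entry above describes one model serving both retrieval paradigms. As a rough, hypothetical sketch of how that can be wired (the paper ships no code; the class name, layer sizes, and method names below are invented for illustration), a shared encoder can feed both a generative decoding head and a dense embedding head:

```python
# Hypothetical sketch of a unified generative + dense retriever; not the
# CLOVER-Unity implementation. One shared encoder backs two heads.
import torch
import torch.nn as nn

class UnifiedRetriever(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)  # generative head
        self.lm_head = nn.Linear(d_model, vocab_size)                # next-token logits
        self.proj = nn.Linear(d_model, d_model)                      # dense head

    def encode(self, ids):
        return self.encoder(self.embed(ids))

    def dense_embedding(self, ids):
        # Mean-pool encoder states into one vector per query/keyword for ANN lookup.
        return self.proj(self.encode(ids).mean(dim=1))

    def generate_logits(self, src_ids, tgt_ids):
        # Teacher-forced decoding of a keyword rewrite conditioned on the query.
        t = tgt_ids.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        hidden = self.decoder(self.embed(tgt_ids), self.encode(src_ids), tgt_mask=causal)
        return self.lm_head(hidden)

model = UnifiedRetriever()
query = torch.randint(0, 30522, (2, 8))           # toy token ids
keyword = torch.randint(0, 30522, (2, 5))
vectors = model.dense_embedding(query)            # (2, 256) embeddings for dense retrieval
logits = model.generate_logits(query, keyword)    # (2, 5, vocab) for generative retrieval
```

At serving time, the dense head would retrieve keywords by nearest-neighbour search over precomputed keyword embeddings, while the generative head would decode keyword rewrites directly, e.g. with beam search constrained to valid keywords.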

Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons

1 code implementation • ACL 2022 • Akash Kumar Mohankumar, Mitesh M. Khapra

In this work, we introduce Active Evaluation, a framework to efficiently identify the top-ranked system by actively choosing system pairs for comparison using dueling bandit algorithms.

nlg evaluation
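
As a rough illustration of the Active Evaluation idea, the loop below repeatedly picks the system pair whose preference estimate is most uncertain, queries a (human or simulated) judge, and updates simple Beta counts. This is a generic dueling-bandit-style heuristic with made-up function names, not one of the specific algorithms benchmarked in the paper:

```python
# Generic dueling-bandit-style selection of which system pair to compare next;
# illustrative only, not the paper's released code.
import itertools
import random

def active_evaluation(systems, judge, budget=200):
    """systems: list of system ids; judge(a, b) -> id of the preferred system."""
    wins = {p: [1, 1] for p in itertools.combinations(systems, 2)}  # Beta(1,1) priors
    for _ in range(budget):
        # Pick the pair whose preference probability is most uncertain
        # (posterior variance of a Beta(w, l) distribution).
        def variance(pair):
            w, l = wins[pair]
            n = w + l
            return (w * l) / (n * n * (n + 1))
        a, b = max(wins, key=variance)
        winner = judge(a, b)
        wins[(a, b)][0 if winner == a else 1] += 1
    # Copeland-style ranking: count the pairings each system is estimated to win.
    score = {s: 0 for s in systems}
    for (a, b), (w, l) in wins.items():
        score[a if w >= l else b] += 1
    return max(score, key=score.get)

# Toy usage with a simulated noisy judge.
true_quality = {"sysA": 0.9, "sysB": 0.6, "sysC": 0.4}
def judge(a, b):
    pa = true_quality[a] / (true_quality[a] + true_quality[b])
    return a if random.random() < pa else b

print(active_evaluation(list(true_quality), judge))
```

In the paper's setting the judge would be a human annotator (or an automatic metric standing in for one), and the goal is to identify the top-ranked system with far fewer pairwise annotations than exhaustive comparison.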

Diversity driven Query Rewriting in Search Advertising

no code implementations • 7 Jun 2021 • Akash Kumar Mohankumar, Nikit Begwani, Amit Singh

For head and torso search queries, sponsored search engines use a huge repository of same-intent queries and keywords, mined ahead of time.

Retrieval

Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining

1 code implementation • 23 Sep 2020 • Ananya B. Sai, Akash Kumar Mohankumar, Siddhartha Arora, Mitesh M. Khapra

However, no such data is publicly available, and hence existing models are usually trained using a single relevant response and multiple randomly selected responses from other contexts (random negatives).

Dialogue Evaluation
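
The abstract above contrasts the released multi-reference adversarial data with the usual practice of training on a single relevant response plus random negatives drawn from other contexts. A minimal sketch of that random-negatives construction (illustrative names only, not the released code):

```python
# Sketch of the "random negatives" training-set construction: each context is
# paired with its own response (label 1) and with responses sampled from other
# contexts (label 0). Names are illustrative.
import random

def build_pairs(dialogues, num_negatives=2, seed=0):
    """dialogues: list of (context, relevant_response) tuples."""
    rng = random.Random(seed)
    examples = []
    for i, (context, response) in enumerate(dialogues):
        examples.append((context, response, 1))             # single relevant response
        others = [r for j, (_, r) in enumerate(dialogues) if j != i]
        for neg in rng.sample(others, min(num_negatives, len(others))):
            examples.append((context, neg, 0))               # random negative
    return examples

data = [("How are you?", "I'm fine, thanks."),
        ("What's the weather like?", "Sunny and warm."),
        ("Any plans tonight?", "Dinner with friends.")]
for ex in build_pairs(data):
    print(ex)
```

Negatives sampled this way tend to be topically unrelated to the context, which is the limitation the adversarially crafted responses in the released dataset aim to address.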

A Survey of Evaluation Metrics Used for NLG Systems

no code implementations • 27 Aug 2020 • Ananya B. Sai, Akash Kumar Mohankumar, Mitesh M. Khapra

The expanding number of NLG models and the shortcomings of the current metrics have led to a rapid surge in the number of evaluation metrics proposed since 2014.

Image Captioning • nlg evaluation • +1

Towards Transparent and Explainable Attention Models

2 code implementations • ACL 2020 • Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse.

Attribute
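
One way to read the diversity-driven objective mentioned above is as an extra loss term that penalises hidden states at different time steps for being too similar to one another. The snippet below shows an illustrative cosine-similarity penalty, not the exact formulation used in the paper:

```python
# Illustrative diversity penalty on LSTM hidden states: discourage the hidden
# representations at different time steps from pointing in the same direction.
# A sketch of the general idea only, not the paper's exact objective.
import torch
import torch.nn.functional as F

def diversity_penalty(hidden):               # hidden: (batch, time, dim)
    h = F.normalize(hidden, dim=-1)          # unit-normalise each time step
    sim = torch.bmm(h, h.transpose(1, 2))    # pairwise cosine similarities (batch, time, time)
    t = sim.size(1)
    off_diag = sim - torch.eye(t, device=sim.device)        # zero out self-similarity
    return off_diag.abs().sum(dim=(1, 2)) / (t * (t - 1))   # mean over ordered pairs

lstm = torch.nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
states, _ = lstm(torch.randn(4, 10, 16))     # (batch=4, time=10, hidden=32)
penalty = diversity_penalty(states)          # one value per batch element
# total_loss = task_loss + 0.1 * penalty.mean()   # added to the usual objective
print(penalty)
```

A term like this would be added to the usual task loss with a small weight, pushing the LSTM to spread information across time steps so that attention over those states is easier to interpret.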

Let's Ask Again: Refine Network for Automatic Question Generation

1 code implementation • IJCNLP 2019 • Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

The generated question should be (i) grammatically correct, (ii) answerable from the passage, and (iii) specific to the given answer.

Question Generation
