Search Results for author: Akhil Kedia

Found 9 papers, 1 paper with code

Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization

no code implementations • Findings (EMNLP) 2021 • Akhil Kedia, Sai Chetan Chinthakindi, WonHo Ryu

Inspired by these approaches for a single-task setting, this paper proposes using a first-order finite-differences algorithm to calculate this gradient from the dot-product of gradients, allowing explicit control over the weight of this component relative to the standard gradients. (An illustrative sketch of this idea follows this entry.)

 Ranked #1 on Text Summarization on GigaWord (using extra training data)

Meta-Learning • Text Summarization
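
Below is a minimal, illustrative sketch in Python/PyTorch of the finite-differences idea described in the snippet above: estimating the gradient of the dot product between two mini-batch gradients and weighting it explicitly against the standard gradients. It is an assumption-laden reading of the abstract, not the authors' code; all names (model, loss_fn, batch_a, batch_b, epsilon, alpha, lr) are placeholders.

    import torch

    def flat_grad(model, loss_fn, batch):
        # Gradient of the loss on one mini-batch, flattened into a single vector.
        params = [p for p in model.parameters() if p.requires_grad]
        loss = loss_fn(model(batch["x"]), batch["y"])
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    def add_to_params(model, vec, scale):
        # In-place update theta <- theta + scale * vec (vec is a flat vector).
        offset = 0
        with torch.no_grad():
            for p in model.parameters():
                if not p.requires_grad:
                    continue
                n = p.numel()
                p.add_(scale * vec[offset:offset + n].view_as(p))
                offset += n

    def regularized_step(model, loss_fn, batch_a, batch_b,
                         epsilon=1e-2, alpha=0.1, lr=1e-3):
        g_a = flat_grad(model, loss_fn, batch_a)  # gradient on mini-batch A
        g_b = flat_grad(model, loss_fn, batch_b)  # gradient on mini-batch B

        # First-order finite differences:
        # (g_b(theta + eps * g_a) - g_b(theta)) / eps  ~=  H_b g_a,
        # one Hessian-vector term of grad_theta(g_a . g_b).
        add_to_params(model, g_a, epsilon)
        g_b_shifted = flat_grad(model, loss_fn, batch_b)
        add_to_params(model, g_a, -epsilon)
        dot_grad = (g_b_shifted - g_b) / epsilon

        # Descend the task loss while ascending the gradient dot product;
        # alpha gives explicit control over the weight of the dot-product term.
        update = g_a + g_b - alpha * dot_grad
        add_to_params(model, update, -lr)

Setting alpha to zero recovers plain two-batch gradient descent, which is what makes the relative weighting of the gradient-alignment term explicit.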

FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering

no code implementations • 18 Nov 2022 • Akhil Kedia, Mohd Abbas Zaidi, Haejun Lee

Using our proposed method, we outperform the current state-of-the-art method by 2.5 Exact Match score on the Natural Questions dataset while using only 25% of the parameters and 35% of the latency during inference, and by 4.4 Exact Match on the WebQuestions dataset.

 Ranked #1 on Question Answering on WebQuestions (using extra training data)

Data Augmentation • Open-Domain Question Answering +1

You Only Need One Model for Open-domain Question Answering

no code implementations • 14 Dec 2021 • Haejun Lee, Akhil Kedia, Jongwon Lee, Ashwin Paranjape, Christopher D. Manning, Kyoung-Gu Woo

Recent approaches to Open-domain Question Answering refer to an external knowledge base using a retriever model, optionally rerank passages with a separate reranker model, and generate an answer using yet another reader model. (An illustrative sketch of this retrieve-rerank-read pipeline follows this entry.)

Hard Attention • Natural Questions +2
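
For context, here is a tiny hypothetical outline in Python (not taken from the paper) of the conventional three-model pipeline described in the snippet above; retriever, reranker, and reader are placeholder objects with assumed search/rerank/generate methods.

    def answer(question, knowledge_base, retriever, reranker, reader, k=100, top=5):
        # 1) Retriever model: fetch candidate passages from the external knowledge base.
        passages = retriever.search(question, knowledge_base, k=k)
        # 2) Optional reranker model: reorder the candidates and keep the best few.
        passages = reranker.rerank(question, passages)[:top]
        # 3) Reader model: generate the final answer from the selected passages.
        return reader.generate(question, passages)

The paper's title indicates that these separately trained models are consolidated into a single one.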

Keep Learning: Self-supervised Meta-learning for Learning from Inference

no code implementations • EACL 2021 • Akhil Kedia, Sai Chetan Chinthakindi

A common approach in many machine learning algorithms involves self-supervised learning on large unlabeled data before fine-tuning on downstream tasks to further improve performance.

Answer Selection • Image Classification +5

Learning to Generate Questions by Recovering Answer-containing Sentences

no code implementations • 1 Jan 2021 • Seohyun Back, Akhil Kedia, Sai Chetan Chinthakindi, Haejun Lee, Jaegul Choo

We evaluate our method against existing ones in terms of the quality of generated questions as well as the fine-tuned MRC model accuracy after training on the data synthetically generated by our method.

Ranked #3 on Question Generation on SQuAD1.1 (using extra training data)

Machine Reading Comprehension • Question Answering +3

NeurQuRI: Neural Question Requirement Inspector for Answerability Prediction in Machine Reading Comprehension

no code implementations • ICLR 2020 • Seohyun Back, Sai Chetan Chinthakindi, Akhil Kedia, Haejun Lee, Jaegul Choo

Real-world question answering systems often retrieve documents potentially relevant to a given question through a keyword search, followed by a machine reading comprehension (MRC) step to find the exact answer from them.

Machine Reading Comprehension • Question Answering

ASGen: Answer-containing Sentence Generation to Pre-Train Question Generator for Scale-up Data in Question Answering

no code implementations • 25 Sep 2019 • Akhil Kedia, Sai Chetan Chinthakindi, Seohyun Back, Haejun Lee, Jaegul Choo

We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data.

Language Modelling • Machine Reading Comprehension +4
