Search Results for author: Sai Chetan Chinthakindi

Found 7 papers, 0 papers with code

Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization

no code implementations • Findings (EMNLP) 2021 • Akhil Kedia, Sai Chetan Chinthakindi, WonHo Ryu

Inspired by these approaches in a single-task setting, this paper proposes using a first-order finite-differences algorithm to calculate the gradient of the dot-product of gradients, allowing explicit control over the weight of this component relative to the standard gradients.

Ranked #1 on Text Summarization on GigaWord (using extra training data)

Meta-Learning Text Summarization
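
As a rough illustration of the excerpt above: the gradient of the dot-product of two mini-batch gradients can be estimated with first-order finite differences (two Hessian-vector products) and combined with the standard gradients under an explicit weight. The sketch below is a minimal toy setup in PyTorch; the quadratic losses, the coefficient `lam`, and the step sizes are illustrative assumptions, not values from the paper.

```python
import torch

def grad_of(loss_fn, theta):
    """Gradient of loss_fn evaluated at theta (theta treated as a leaf)."""
    theta = theta.detach().requires_grad_(True)
    return torch.autograd.grad(loss_fn(theta), theta)[0]

def dot_product_grad(loss1, loss2, theta, eps=1e-2):
    """Finite-difference estimate of d/dtheta (g1 . g2) ~= H1 g2 + H2 g1."""
    g1, g2 = grad_of(loss1, theta), grad_of(loss2, theta)
    h1g2 = (grad_of(loss1, theta + eps * g2) - g1) / eps   # ~ H1 @ g2
    h2g1 = (grad_of(loss2, theta + eps * g1) - g2) / eps   # ~ H2 @ g1
    return h1g2 + h2g1

# Toy usage: two quadratic "mini-batch" losses sharing the same parameters.
A1 = torch.diag(torch.tensor([1.0, 3.0]))
A2 = torch.diag(torch.tensor([2.0, 0.5]))
loss1 = lambda th: 0.5 * th @ A1 @ th
loss2 = lambda th: 0.5 * th @ A2 @ th

theta = torch.tensor([1.0, -2.0])
lam, lr = 0.1, 0.05                      # illustrative weight and step size
for _ in range(100):
    g = grad_of(loss1, theta) + grad_of(loss2, theta)
    # Standard gradients minus an explicitly weighted gradient-alignment term,
    # i.e. descent on L1 + L2 - lam * (g1 . g2).
    theta = theta - lr * (g - lam * dot_product_grad(loss1, loss2, theta))
```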

A Zero-Shot Claim Detection Framework Using Question Answering

no code implementations • COLING 2022 • Revanth Gangi Reddy, Sai Chetan Chinthakindi, Yi R. Fung, Kevin Small, Heng Ji

In recent years, there has been an increasing interest in claim detection as an important building block for misinformation detection.

Misinformation Object +3

Keep Learning: Self-supervised Meta-learning for Learning from Inference

no code implementations • EACL 2021 • Akhil Kedia, Sai Chetan Chinthakindi

A common approach in many machine learning algorithms involves self-supervised learning on large unlabeled data before fine-tuning on downstream tasks to further improve performance.

Answer Selection Image Classification +5
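
The excerpt above refers to the generic pretrain-then-fine-tune workflow rather than the paper's own method. A minimal sketch of that pattern, assuming a toy denoising objective on random vectors followed by a small supervised stage in PyTorch (all data, dimensions, and objectives below are made up for illustration):

```python
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
decoder = nn.Linear(16, 16)          # used only for the self-supervised stage
head = nn.Linear(16, 2)              # downstream classification head

# 1) Self-supervised pre-training: reconstruct clean vectors from noisy ones.
unlabeled = torch.randn(512, 16)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    noisy = unlabeled + 0.1 * torch.randn_like(unlabeled)
    loss = nn.functional.mse_loss(decoder(encoder(noisy)), unlabeled)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Fine-tuning: reuse the pre-trained encoder on a much smaller labeled set.
x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(50):
    loss = nn.functional.cross_entropy(head(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
```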

Learning to Generate Questions by Recovering Answer-containing Sentences

no code implementations • 1 Jan 2021 • Seohyun Back, Akhil Kedia, Sai Chetan Chinthakindi, Haejun Lee, Jaegul Choo

We evaluate our method against existing ones in terms of the quality of generated questions as well as the fine-tuned MRC model accuracy after training on the data synthetically generated by our method.

Ranked #3 on Question Generation on SQuAD1.1 (using extra training data)

Machine Reading Comprehension Question Answering +3

NeurQuRI: Neural Question Requirement Inspector for Answerability Prediction in Machine Reading Comprehension

no code implementations • ICLR 2020 • Seohyun Back, Sai Chetan Chinthakindi, Akhil Kedia, Haejun Lee, Jaegul Choo

Real-world question answering systems often retrieve documents potentially relevant to a given question through a keyword search, followed by a machine reading comprehension (MRC) step to find the exact answer among them.

Machine Reading Comprehension Question Answering
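
A minimal sketch of the retrieve-then-read pipeline described in the excerpt above: a keyword search narrows a document collection to a few candidates, and a reading step then picks an answer. The overlap-based retriever and the trivial "reader" below are toy stand-ins for a real search engine and a real MRC model, not anything from the paper.

```python
def keyword_retrieve(question, documents, k=2):
    """Rank documents by word overlap with the question (toy keyword search)."""
    q = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def read_answer(question, document):
    """Stand-in for an MRC model: return the sentence overlapping the question most."""
    q = set(question.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & set(s.lower().split())))

docs = [
    "The Eiffel Tower is in Paris. It was completed in 1889.",
    "The Colosseum is in Rome. It dates from the first century.",
]
question = "When was the Eiffel Tower completed"
for doc in keyword_retrieve(question, docs):
    print(read_answer(question, doc))
```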

ASGen: Answer-containing Sentence Generation to Pre-Train Question Generator for Scale-up Data in Question Answering

no code implementations • 25 Sep 2019 • Akhil Kedia, Sai Chetan Chinthakindi, Seohyun Back, Haejun Lee, Jaegul Choo

We evaluate the question generation capability of our method by comparing its BLEU score against existing methods, and we test the method by fine-tuning the MRC model on downstream MRC data after training on the synthetic data.

Language Modelling Machine Reading Comprehension +4
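
A minimal sketch of the BLEU comparison mentioned in the excerpt above, assuming NLTK is available; the reference and generated questions here are made-up examples, not data from the paper.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One list of reference tokenizations per generated question.
references = [["when was the eiffel tower completed".split()]]
generated = ["when was the eiffel tower built".split()]

score = corpus_bleu(references, generated,
                    smoothing_function=SmoothingFunction().method1)
print(f"corpus BLEU: {score:.3f}")
```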
