Search Results for author: Haejun Lee

Found 12 papers, 4 papers with code

FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering

1 code implementation · 18 Nov 2022 · Akhil Kedia, Mohd Abbas Zaidi, Haejun Lee

Using our proposed method, we outperform the current state-of-the-art method by $2.5$ Exact Match points on the Natural Questions dataset while using only $25\%$ of the parameters and $35\%$ of the inference latency, and by $4.4$ Exact Match points on the WebQuestions dataset.

Data Augmentation · Open-Domain Question Answering +1

You Only Need One Model for Open-domain Question Answering

no code implementations · 14 Dec 2021 · Haejun Lee, Akhil Kedia, Jongwon Lee, Ashwin Paranjape, Christopher D. Manning, Kyoung-Gu Woo

Recent approaches to Open-domain Question Answering retrieve from an external knowledge base with a retriever model, optionally rerank the retrieved passages with a separate reranker model, and generate an answer with yet another reader model.
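The conventional three-model pipeline the abstract describes can be sketched as follows; all scoring rules and data here are illustrative toy stand-ins, not the authors' actual models.

```python
# Toy sketch of the conventional retriever -> reranker -> reader pipeline
# that the paper contrasts itself with. Scoring rules are illustrative only.

def retrieve(question, corpus, k=3):
    # Retriever: rank passages by word overlap with the question.
    q = set(question.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:k]

def rerank(question, passages):
    # Reranker: reorder the retrieved passages (here, naively by length).
    return sorted(passages, key=len)

def read(question, passages):
    # Reader: pick the first token that is not already a question word.
    q_words = set(question.lower().split())
    for passage in passages:
        for tok in passage.split():
            if tok.lower() not in q_words:
                return tok.rstrip(".,")
    return None

corpus = [
    "Paris is the capital of France .",
    "Berlin is the capital of Germany .",
]
question = "What is the capital of France ?"
answer = read(question, rerank(question, retrieve(question, corpus)))
# -> "Paris"
```

The paper's point is that these three separately trained models can be collapsed into a single model; the sketch only illustrates the baseline structure being replaced.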

Hard Attention · Natural Questions +2

Learning to Generate Questions by Recovering Answer-containing Sentences

no code implementations · 1 Jan 2021 · Seohyun Back, Akhil Kedia, Sai Chetan Chinthakindi, Haejun Lee, Jaegul Choo

We evaluate our method against existing ones in terms of the quality of generated questions as well as the fine-tuned MRC model accuracy after training on the data synthetically generated by our method.

Ranked #4 on Question Generation on SQuAD1.1 (using extra training data)

Machine Reading Comprehension · Question Answering +2

SLM: Learning a Discourse Language Representation with Sentence Unshuffling

no code implementations · EMNLP 2020 · Haejun Lee, Drew A. Hudson, Kangwook Lee, Christopher D. Manning

We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation in a fully self-supervised manner.
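The self-supervised setup described above can be illustrated by how a training instance is built: shuffle a document's sentences and ask the model to recover the original order. The model itself is omitted here; this sketch only shows the input/target construction and is not the authors' code.

```python
# Minimal sketch of constructing a sentence-unshuffling training example.
import random

def make_unshuffling_example(sentences, seed=0):
    # Shuffle the sentence order; the target is, for each shuffled position,
    # the sentence's original index in the document.
    order = list(range(len(sentences)))
    random.Random(seed).shuffle(order)
    shuffled = [sentences[i] for i in order]
    return shuffled, order

doc = ["First sentence.", "Second sentence.", "Third sentence."]
shuffled, target = make_unshuffling_example(doc)

# A model that predicts `target` correctly can restore the document:
restored = [s for _, s in sorted(zip(target, shuffled))]
```

Predicting the original position of each sentence forces the model to learn inter-sentence (discourse-level) relationships, which is the stated goal of the pre-training objective.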

Language Modelling

NeurQuRI: Neural Question Requirement Inspector for Answerability Prediction in Machine Reading Comprehension

no code implementations · ICLR 2020 · Seohyun Back, Sai Chetan Chinthakindi, Akhil Kedia, Haejun Lee, Jaegul Choo

Real-world question answering systems often retrieve documents potentially relevant to a given question through a keyword search, followed by a machine reading comprehension (MRC) step to find the exact answer within them.

Machine Reading Comprehension · Question Answering

ASGen: Answer-containing Sentence Generation to Pre-Train Question Generator for Scale-up Data in Question Answering

no code implementations · 25 Sep 2019 · Akhil Kedia, Sai Chetan Chinthakindi, Seohyun Back, Haejun Lee, Jaegul Choo

We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data.

Language Modelling · Machine Reading Comprehension +3

On-Device Neural Language Model Based Word Prediction

1 code implementation · COLING 2018 · Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim

Recent developments in deep learning applied to language modeling have led to success in text-processing tasks such as summarization and machine translation.

Automatic Speech Recognition · Language Modelling +4

Syllable-level Neural Language Model for Agglutinative Language

no code implementations · WS 2017 · Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim

Language models for agglutinative languages have long been hindered by the myriad agglutinated forms that various affixes can produce from any given word.
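Why a syllable-level model helps can be seen from tokenization: in a language like Korean, each Hangul character is one syllable, so modeling at the syllable level keeps the vocabulary small even though affixation produces a huge number of distinct word forms. The sketch below is purely illustrative and not the paper's code.

```python
# Hedged sketch: syllable-level tokenization for Korean text, where each
# Hangul character is a syllable block. "<w>" marks word boundaries.

def syllable_tokenize(text):
    tokens = []
    for word in text.split():
        tokens.extend(list(word))  # each character = one syllable token
        tokens.append("<w>")
    return tokens[:-1]  # drop the trailing boundary marker

toks = syllable_tokenize("학교에 갔다")
# -> ['학', '교', '에', '<w>', '갔', '다']
```

A word-level model would need a separate vocabulary entry for every inflected form (e.g. 학교에, 학교를, 학교가), whereas the syllable vocabulary stays bounded by the set of Hangul syllable blocks.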

Language Modelling

An Embedded Deep Learning based Word Prediction

1 code implementation · 6 Jul 2017 · Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim

Recent developments in deep learning applied to language modeling have led to success in text-processing tasks such as summarization and machine translation.

Language Modelling · Machine Translation +1
