Search Results for author: Junmo Kang

Found 13 papers, 3 papers with code

Have You Seen That Number? Investigating Extrapolation in Question Answering Models

no code implementations · EMNLP 2021 · Jeonghwan Kim, Giwon Hong, Kyung-Min Kim, Junmo Kang, Sung-Hyon Myaeng

Our work rigorously tests state-of-the-art models on DROP, a numerical MRC dataset, to see if they can handle passages that contain out-of-range numbers.

Machine Reading Comprehension · Question Answering

Self-Specialization: Uncovering Latent Expertise within Large Language Models

no code implementations · 29 Sep 2023 · Junmo Kang, Hongyin Luo, Yada Zhu, James Glass, David Cox, Alan Ritter, Rogerio Feris, Leonid Karlinsky

Recent works have demonstrated the effectiveness of self-alignment in which a large language model is, by itself, aligned to follow general instructions through the automatic generation of instructional data using a handful of human-written seeds.

Hallucination · Instruction Following · +2

Schema-Driven Information Extraction from Heterogeneous Tables

1 code implementation · 23 May 2023 · Fan Bai, Junmo Kang, Gabriel Stanovsky, Dayne Freitag, Alan Ritter

We use this collection of annotated tables to evaluate the ability of open-source and API-based language models to extract information from tables covering diverse domains and data formats.

Attribute Extraction · Instruction Following · +1

Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models

no code implementations · 2 May 2023 · Junmo Kang, Wei Xu, Alan Ritter

Fine-tuning large models is highly effective; however, inference can be expensive and produces carbon emissions.

Knowledge Distillation

Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise

1 code implementation · 2 May 2023 · Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, Joyce Jiyoung Whang

Most existing retrieval-augmented language models (LMs) assume a naive dichotomy within a retrieved document set: query-relevance and irrelevance.

counterfactual · Few-Shot Learning · +4

Maximizing Efficiency of Language Model Pre-training for Learning Representation

no code implementations · 13 Oct 2021 · Junmo Kang, Suwon Shin, Jeonghwan Kim, Jaeyoung Jo, Sung-Hyon Myaeng

Moreover, by thoroughly investigating the necessity of ELECTRA's generator module, we evaluate an initial approach to the problem that shows promising compute efficiency but has not yet succeeded in maintaining the model's accuracy.

Language Modelling · Masked Language Modeling

Leveraging Order-Free Tag Relations for Context-Aware Recommendation

no code implementations · EMNLP 2021 · Junmo Kang, Jeonghwan Kim, Suwon Shin, Sung-Hyon Myaeng

Tag recommendation relies on either a ranking function for top-$k$ tags or an autoregressive generation method.

TAG

Handling Anomalies of Synthetic Questions in Unsupervised Question Answering

no code implementations · COLING 2020 · Giwon Hong, Junmo Kang, Doyeon Lim, Sung-Hyon Myaeng

Advances in Question Answering (QA) research require additional datasets for new domains, languages, and types of questions, as well as for performance increases.

Question Answering

Let Me Know What to Ask: Interrogative-Word-Aware Question Generation

no code implementations · WS 2019 · Junmo Kang, Haritz Puerto San Roman, Sung-Hyon Myaeng

Owing to an increased recall of deciding the interrogative words to be used for the generated questions, the proposed model achieves new state-of-the-art results on the task of QG in SQuAD, improving from 46.58 to 47.69 in BLEU-1, 17.55 to 18.53 in BLEU-4, 21.24 to 22.33 in METEOR, and from 44.53 to 46.94 in ROUGE-L.

Question Answering · Question Generation · +1
