Search Results for author: Sung-Hyon Myaeng

Found 16 papers, 3 papers with code

Have You Seen That Number? Investigating Extrapolation in Question Answering Models

no code implementations • EMNLP 2021 • Jeonghwan Kim, Giwon Hong, Kyung-Min Kim, Junmo Kang, Sung-Hyon Myaeng

Our work rigorously tests state-of-the-art models on DROP, a numerical MRC dataset, to see if they can handle passages that contain out-of-range numbers.

Machine Reading Comprehension Question Answering

Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise

1 code implementation • 2 May 2023 • Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, Joyce Jiyoung Whang

Most existing retrieval-augmented language models (LMs) assume a naive dichotomy within a retrieved document set: query-relevance and irrelevance.

counterfactual Few-Shot Learning +4

Maximizing Efficiency of Language Model Pre-training for Learning Representation

no code implementations • 13 Oct 2021 • Junmo Kang, Suwon Shin, Jeonghwan Kim, Jaeyoung Jo, Sung-Hyon Myaeng

Moreover, by thoroughly investigating the necessity of ELECTRA's generator module, we evaluate an initial approach to the problem that shows promising compute efficiency but does not succeed in maintaining the model's accuracy.

Language Modelling Masked Language Modeling

Leveraging Order-Free Tag Relations for Context-Aware Recommendation

no code implementations • EMNLP 2021 • Junmo Kang, Jeonghwan Kim, Suwon Shin, Sung-Hyon Myaeng

Tag recommendation relies on either a ranking function for top-$k$ tags or an autoregressive generation method.

TAG

Handling Anomalies of Synthetic Questions in Unsupervised Question Answering

no code implementations • COLING 2020 • Giwon Hong, Junmo Kang, Doyeon Lim, Sung-Hyon Myaeng

Advances in Question Answering (QA) research require additional datasets for new domains, languages, and types of questions, as well as for performance increases.

Question Answering

Roles and Utilization of Attention Heads in Transformer-based Neural Language Models

1 code implementation • ACL 2020 • Jae-young Jo, Sung-Hyon Myaeng

Sentence encoders based on the transformer architecture have shown promising results on various natural language tasks.

Sentence

Let Me Know What to Ask: Interrogative-Word-Aware Question Generation

no code implementations • WS 2019 • Junmo Kang, Haritz Puerto San Roman, Sung-Hyon Myaeng

Owing to an increased recall of deciding the interrogative words to be used for the generated questions, the proposed model achieves new state-of-the-art results on the task of QG in SQuAD, improving from 46.58 to 47.69 in BLEU-1, 17.55 to 18.53 in BLEU-4, 21.24 to 22.33 in METEOR, and from 44.53 to 46.94 in ROUGE-L.

Question Answering Question Generation +1

Aligning Open IE Relations and KB Relations using a Siamese Network Based on Word Embedding

no code implementations • WS 2019 • Rifki Afina Putri, Giwon Hong, Sung-Hyon Myaeng

Open Information Extraction (Open IE) aims at generating entity-relation-entity triples from a large amount of text, capturing the key semantics of the text.

Knowledge Graphs Open Information Extraction +2

Interpretable Word Embedding Contextualization

no code implementations • WS 2018 • Kyoung-Rok Jang, Sung-Hyon Myaeng, Sang-Bum Kim

In this paper, we propose a method of calibrating a word embedding so that the semantics it conveys become more relevant to the context.

Word Embeddings

Elucidating Conceptual Properties from Word Embeddings

no code implementations • WS 2017 • Kyoung-Rok Jang, Sung-Hyon Myaeng

In this paper, we introduce a method of identifying the components (i.e., dimensions) of word embeddings that strongly signify the properties of a word.

Decision Making Named Entity Recognition (NER) +2
