Search Results for author: Myeongjun Jang

Found 10 papers, 4 papers with code

BECEL: Benchmark for Consistency Evaluation of Language Models

1 code implementation • COLING 2022 • Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz

Behavioural consistency is a critical condition for a language model (LM) to be trusted in the way humans are.

Language Modelling

KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations

no code implementations • 5 Jun 2023 • Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu

While recent works have considerably improved the quality of the natural language explanations (NLEs) that a model generates to justify its predictions, there is very limited research on detecting and alleviating inconsistencies among generated NLEs.

Adversarial Attack

Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence

1 code implementation • Findings (NAACL) 2022 • Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz

To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence instead of relying on the distributional hypothesis (a hedged sketch of such a task follows this entry).

Language Modelling Negation
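The excerpt above frames meaning-matching as a decision over whether a piece of text expresses a given meaning. Below is a minimal sketch of such an intermediate training step, assuming (term, definition) pairs with binary match labels; the example pair, the base checkpoint, and the two-label setup are illustrative assumptions, not the paper's exact data or configuration.

```python
# Hedged sketch of a meaning-matching-style intermediate training step.
# The (term, definition) pair and label below are illustrative, not the
# paper's actual training data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = mismatched, 1 = matching meaning
)

# Encode the pair as two segments and ask: does the definition express the term?
inputs = tokenizer("benevolent", "kind and generous", return_tensors="pt")
labels = torch.tensor([1])  # assumed positive (matching) pair

loss = model(**inputs, labels=labels).loss
loss.backward()  # an optimizer step would follow in a real training loop
```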

KOBEST: Korean Balanced Evaluation of Significant Tasks

no code implementations • COLING 2022 • Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, Eric Davis

To this end, we propose a new benchmark named Korean balanced evaluation of significant tasks (KoBEST), which consists of five Korean-language downstream tasks.

Are Training Resources Insufficient? Predict First Then Explain!

no code implementations • 29 Aug 2021 • Myeongjun Jang, Thomas Lukasiewicz

The predominant form of these models is the explain-then-predict (EtP) structure, which first generates an explanation and then uses it to make a prediction (a minimal sketch of this pipeline follows this entry).

Decision Making Explanation Generation
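The excerpt above outlines the explain-then-predict (EtP) pipeline: generate an explanation first, then condition the prediction on it. Below is a minimal sketch of that two-stage flow; the model checkpoints, the prompt, and the `[SEP]` joining convention are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of an explain-then-predict (EtP) pipeline: an explainer
# generates a free-text rationale, and a predictor classifies the input
# together with that rationale. Checkpoints and prompt are illustrative.
from transformers import pipeline

explainer = pipeline("text2text-generation", model="t5-small")
predictor = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

premise = "The movie was a waste of two hours."
explanation = explainer(f"explain the sentiment: {premise}")[0]["generated_text"]
prediction = predictor(f"{premise} [SEP] {explanation}")[0]  # predict using the explanation
print(explanation, "->", prediction["label"])
```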

NoiER: An Approach for Training more Reliable Fine-Tuned Downstream Task Models

no code implementations • 29 Aug 2021 • Myeongjun Jang, Thomas Lukasiewicz

Recent developments in pretrained language models trained in a self-supervised fashion, such as BERT, are driving rapid progress in the field of NLP.

Out of Distribution (OOD) Detection

Accurate, yet inconsistent? Consistency Analysis on Language Understanding Models

no code implementations • 15 Aug 2021 • Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz

Consistency, which refers to the capability of generating the same predictions for semantically similar contexts, is a highly desirable property for a sound language understanding model (a toy illustration follows this entry).

Paraphrase Identification
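The excerpt above defines consistency as giving the same prediction to semantically equivalent inputs. Below is a toy illustration of such a check; the sentiment checkpoint and the paraphrase pair are assumptions for demonstration, not the paper's evaluation protocol.

```python
# Toy consistency check: a sound model should label paraphrases identically.
# The checkpoint and sentence pair are illustrative assumptions.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

a = clf("The film was thoroughly enjoyable.")[0]["label"]
b = clf("I thoroughly enjoyed the film.")[0]["label"]
print("consistent" if a == b else "inconsistent", a, b)
```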

Paraphrase Thought: Sentence Embedding Module Imitating Human Language Recognition

1 code implementation • 16 Aug 2018 • Myeongjun Jang, Pilsung Kang

However, because performance on sentence classification and sentiment analysis can be improved even with simple sentence representation methods, good results on such tasks are not sufficient to claim that these models fully capture the meanings of sentences.

Document Classification General Classification +8

Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning

1 code implementation • 9 Feb 2018 • Myeongjun Jang, Seungwan Seo, Pilsung Kang

In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN-SVAE), to better capture the global latent information of a sequence of words (a hedged architectural sketch follows this entry).

Imputation Language Modelling +6
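The excerpt above describes an RNN-based variational autoencoder that encodes the global information of a word sequence into a latent vector. A hedged architectural sketch follows; the layer sizes, the GRU cells, and the choice of averaging encoder hidden states as the global summary are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of an RNN-based variational sentence autoencoder in the
# spirit of RNN-SVAE. Dimensions and the mean-of-hidden-states summary
# are illustrative assumptions.
import torch
import torch.nn as nn

class RNNSVAESketch(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, latent_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)      # mean of the latent Gaussian
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of the latent Gaussian
        self.decoder = nn.GRU(embed_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, vocab_size)

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))   # (batch, time, hidden)
        summary = states.mean(dim=1)                   # global info over all time steps
        mu, logvar = self.to_mu(summary), self.to_logvar(summary)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        dec_states, _ = self.decoder(self.embed(tokens), z.unsqueeze(0))  # z seeds the decoder
        return self.out(dec_states), mu, logvar  # logits plus terms for the KL loss
```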
