1 code implementation • COLING 2022 • Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz
Behavioural consistency is a critical condition for a language model (LM) to become trustworthy like humans.
no code implementations • 5 Jun 2023 • Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu
While recent works have considerably improved the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research in detecting and alleviating inconsistencies among generated NLEs.
1 code implementation • Findings (NAACL) 2022 • Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz
To alleviate this issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence instead of relying on the distributional hypothesis.
no code implementations • COLING 2022 • Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, Eric Davis
To this end, we propose a new benchmark named Korean balanced evaluation of significant tasks (KoBEST), which consists of five Korean-language downstream tasks.
no code implementations • 29 Aug 2021 • Myeongjun Jang, Thomas Lukasiewicz
The most predominant form of these models is the explain-then-predict (EtP) structure, which first generates explanations and uses them for making decisions.
no code implementations • 29 Aug 2021 • Myeongjun Jang, Thomas Lukasiewicz
The recent development in pretrained language models trained in a self-supervised fashion, such as BERT, is driving rapid progress in the field of NLP.
no code implementations • 15 Aug 2021 • Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz
Consistency, which refers to the capability of generating the same predictions for semantically similar contexts, is a highly desirable property for a sound language understanding model.
no code implementations • 16 Jan 2019 • Myeongjun Jang, Pilsung Kang
Sentence embedding is a significant research topic in the field of natural language processing (NLP).
1 code implementation • 16 Aug 2018 • Myeongjun Jang, Pilsung Kang
However, because the performance of sentence classification and sentiment analysis can be enhanced even by a simple sentence representation method, good performance on such tasks is not sufficient to claim that these models fully capture the meanings of sentences.
1 code implementation • 9 Feb 2018 • Myeongjun Jang, Seungwan Seo, Pilsung Kang
In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN-SVAE), to better capture the global latent information of a sequence of words.