no code implementations • EMNLP 2020 • JaeHun Jung, Bokyung Son, Sungwon Lyu
Retrieving the knowledge relevant to a conversational context is an important challenge for dialogue systems that aim to engage users with more informative responses.
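As a concrete baseline for what "retrieving the relevant knowledge" means in practice, a simple lexical retriever scores each knowledge candidate against the dialogue context and returns the best match. The paper's method is more sophisticated; the sketch below, with made-up data, only illustrates the retrieval step itself:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny knowledge pool; a real system would index documents or a KG.
knowledge = [
    "The Louvre is the world's most-visited museum, located in Paris.",
    "Python is a popular programming language.",
    "The Mona Lisa is displayed at the Louvre.",
]
context = "I'm visiting Paris next week. Which museum should I see?"

# Score every knowledge sentence against the conversational context.
vec = TfidfVectorizer().fit(knowledge + [context])
scores = cosine_similarity(vec.transform([context]), vec.transform(knowledge))[0]
print(knowledge[scores.argmax()])  # highest-scoring knowledge sentence
```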
no code implementations • 25 Jul 2024 • JaeHun Jung, Faeze Brahman, Yejin Choi
We present a principled approach to provide LLM-based evaluation with a rigorous guarantee of human agreement.
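The abstract leaves the mechanism implicit; one standard way to obtain such a guarantee is selective evaluation: calibrate a confidence threshold on a small set of human-labeled judgments so that any LLM judgment accepted above the threshold agrees with humans at a target rate, and abstain otherwise. A minimal sketch of that calibration step, assuming each judgment comes with a confidence score (the rule and thresholds below are illustrative, not the paper's exact procedure):

```python
import numpy as np

def calibrate_threshold(confidences, agrees, alpha=0.1):
    """Find a confidence threshold such that judgments at or above it
    agree with human labels at rate >= 1 - alpha on the calibration
    set (a simplified, conformal-style acceptance rule)."""
    order = np.argsort(-confidences)      # most confident first
    conf, agree = confidences[order], agrees[order]
    for k in range(len(conf), 0, -1):     # try the largest accepted set first
        if agree[:k].mean() >= 1 - alpha:
            return conf[k - 1]            # accept everything at least this confident
    return float("inf")                   # no safe threshold: abstain on all

# Toy calibration data: judge confidences and agreement with humans.
confs = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.50])
agree = np.array([1, 1, 1, 0, 1, 0])
print(calibrate_threshold(confs, agree, alpha=0.2))  # -> 0.6
```

At test time, judgments below the calibrated threshold would be escalated (e.g., to a human), which is what turns an empirical accuracy into a guarantee on the accepted subset.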
no code implementations • 20 Mar 2024 • JaeHun Jung, Ximing Lu, Liwei Jiang, Faeze Brahman, Peter West, Pang Wei Koh, Yejin Choi
The current winning recipe for automatic summarization is to use proprietary large-scale language models (LLMs) such as ChatGPT as-is, or to perform imitation learning from them as teacher models.
1 code implementation • 13 Feb 2024 • Jillian Fisher, Ximing Lu, JaeHun Jung, Liwei Jiang, Zaid Harchaoui, Yejin Choi
The permanence of online content, combined with increasingly capable authorship identification techniques, calls for stronger computational methods to protect the identity and privacy of online authors when needed, e.g., in blind reviews for scientific papers, anonymous online reviews, or anonymous interactions in mental health forums.
1 code implementation • 13 Nov 2023 • Skyler Hallinan, Faeze Brahman, Ximing Lu, JaeHun Jung, Sean Welleck, Yejin Choi
We propose STEER: Unified Style Transfer with Expert Reinforcement, a unified framework developed to overcome the challenge of limited parallel data for style transfer.
no code implementations • 26 May 2023 • JaeHun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi
We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization that distills a high-quality dataset and model from a low-quality teacher that itself cannot perform these tasks.
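The core recipe behind this kind of distillation is to over-generate candidate pairs from a weak model and keep only the pairs that pass strict filters, so the distilled dataset is better than anything the teacher produces on average. A minimal sketch of the filtering step; the similarity function and thresholds are illustrative stand-ins, not the paper's actual filters:

```python
from difflib import SequenceMatcher

def keep_pair(src, gen, sim_fn, min_sim=0.6):
    """Filter for distilled paraphrase pairs: semantically close to the
    source, but not a trivial copy. Thresholds and the similarity
    function are illustrative stand-ins for the paper's filters."""
    return gen.strip().lower() != src.strip().lower() and sim_fn(src, gen) >= min_sim

# Stand-in similarity; a real pipeline would use stronger semantic filters.
sim = lambda a, b: SequenceMatcher(None, a, b).ratio()

candidates = [("the cat sat on the mat", "a cat was sitting on the mat"),
              ("the cat sat on the mat", "the cat sat on the mat"),
              ("the cat sat on the mat", "stock prices fell on Monday")]
dataset = [(s, g) for s, g in candidates if keep_pair(s, g, sim)]
print(dataset)  # only the genuine, non-trivial paraphrase survives
```

A student model fine-tuned on the surviving pairs can then outperform the teacher, because the filters, not the teacher, define the quality bar.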
no code implementations • 24 May 2022 • JaeHun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
Despite their impressive capabilities, large pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction for addressing this.
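The simplest form of explanation-guided ("self-guided") inference is a two-step prompt: elicit an explanation first, then condition the final answer on it. A minimal sketch of that pattern, where `query_lm` is a hypothetical stand-in for any LM completion call (prompt in, text out):

```python
def explain_then_answer(question, query_lm):
    """Explanation-guided two-step inference: first elicit an explanation,
    then condition the final answer on it. `query_lm` is a hypothetical
    stand-in for a real LM API call."""
    explanation = query_lm(
        f"Q: {question}\nGive a short explanation of the relevant facts:")
    answer = query_lm(
        f"Q: {question}\nExplanation: {explanation}\n"
        f"Based on the explanation, answer True or False:")
    return explanation, answer

# Stubbed LM so the sketch runs; a real call would hit an actual model.
fake_lm = lambda p: "True" if "Explanation:" in p else "Penguins are flightless birds."
print(explain_then_answer("Is a penguin a bird?", fake_lm))
```

The consistency problem the abstract points to arises because different sampled explanations can steer the same question to contradictory answers; the paper's contribution is to resolve such inconsistencies rather than trust a single chain.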
no code implementations • 19 Dec 2020 • JaeHun Jung, Jinhong Jung, U Kang
However, most of the existing models for TKG completion extend static KG embeddings that do not fully exploit TKG structure, thus lacking in 1) accounting for temporally relevant events already residing in the local neighborhood of a query, and 2) path-based inference that facilitates multi-hop reasoning and better interpretability.
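To make the second gap concrete: path-based inference over a TKG enumerates multi-hop paths whose timestamps respect temporal order, so each path is a human-readable chain of evidence for a candidate answer. A minimal sketch over a toy edge list, assuming the simplest possible TKG representation (not the paper's model):

```python
from collections import defaultdict

# A temporal KG as time-stamped edges: (head, relation, tail, timestamp).
edges = [
    ("A", "met", "B", 1),
    ("B", "visited", "C", 2),
    ("A", "called", "C", 3),
]

graph = defaultdict(list)
for h, r, t, ts in edges:
    graph[h].append((r, t, ts))

def temporal_paths(graph, start, max_hops=2):
    """Enumerate multi-hop paths with non-decreasing timestamps, so each
    hop respects the temporal order of events -- the kind of path
    evidence a path-based TKG model would score. Illustrative only."""
    paths = []
    def walk(node, path, last_ts):
        if path:
            paths.append(list(path))
        if len(path) == max_hops:
            return
        for r, nxt, ts in graph[node]:
            if ts >= last_ts:              # respect time ordering
                path.append((node, r, nxt, ts))
                walk(nxt, path, ts)
                path.pop()
    walk(start, [], float("-inf"))
    return paths

for p in temporal_paths(graph, "A"):
    print(p)
```

A path like A --met(t=1)--> B --visited(t=2)--> C both supports the query answer and explains it, which is the interpretability benefit the abstract contrasts against static embedding extensions.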