no code implementations • 12 Sep 2024 • WooJin Chung, Jiwoo Hong, Na Min An, James Thorne, Se-Young Yun
Stable pre-training is essential for achieving better-performing language models.
no code implementations • 10 Jun 2024 • Jiwoo Hong, Sayak Paul, Noah Lee, Kashif Rasul, James Thorne, Jongheon Jeong
In this paper, we focus on the alignment of recent text-to-image diffusion models, such as Stable Diffusion XL (SDXL), and find that this "reference mismatch" is indeed a significant problem in aligning these models due to the unstructured nature of visual modalities: e.g., a preference for a particular stylistic aspect can easily induce such a discrepancy.
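For context, the sketch below shows a generic reference-anchored preference loss (the standard DPO objective, not the method proposed in this paper); the function name, arguments, and `beta` value are illustrative assumptions. It highlights where a fixed reference model enters the objective, which is exactly where the "reference mismatch" described above arises: preferred samples that the reference model assigns low probability (e.g., a new visual style) are penalized by the anchoring term.

```python
import torch.nn.functional as F

def reference_dpo_loss(policy_chosen_logp, policy_rejected_logp,
                       ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective (illustrative sketch, not this paper's method).

    The reference log-probabilities anchor the policy: if the preferred
    sample lies far from what the reference model would generate, the
    implicit reward (policy_logp - ref_logp) is pulled down, producing the
    reference mismatch discussed above.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```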
4 code implementations • 12 Mar 2024 • Jiwoo Hong, Noah Lee, James Thorne
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence.
no code implementations • 5 Apr 2023 • Jiwoo Hong, Yejin Cho, Jaemin Jung, Jiyoung Han, James Thorne
Our approach overcomes this limitation by considering both sentence-level semantics and document-level rhetorical structure, resulting in a more robust, style-agnostic method for detecting political bias in news articles.
no code implementations • NeurIPS 2021 • Sunghyeon Woo, Jeongwoo Park, Jiwoo Hong, Dongsuk Jeon
One of the reasons it is difficult for the brain to perform backpropagation (BP) is the weight transport problem, which holds that forward and feedback neurons cannot share the same synaptic weights during learning in biological neural networks.
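For illustration only (not the authors' method): a minimal sketch contrasting the backward pass of backpropagation, which reuses the transposed forward weights and thus requires weight transport, with feedback alignment, which carries the error through a separate fixed random matrix; all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128)) * 0.05   # forward synaptic weights: h = W @ x
B = rng.standard_normal((128, 64)) * 0.05   # fixed random feedback weights

x = rng.standard_normal(128)
h = W @ x                                    # forward pass through a linear layer
grad_h = rng.standard_normal(64)             # upstream error signal at h

# Backpropagation: the error travels back through W.T, i.e. the feedback
# path must reuse the exact forward weights (the weight transport problem).
grad_x_bp = W.T @ grad_h

# Feedback alignment: a separate, fixed random matrix B carries the error,
# so no synaptic weights need to be shared between forward and feedback paths.
grad_x_fa = B @ grad_h
```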