no code implementations • In2Writing (ACL) 2022 • Yoonjoo Lee, Tae Soo Kim, Minsuk Chang, Juho Kim
Storytelling in early childhood provides significant benefits in language and literacy development, relationship building, and entertainment.
no code implementations • 22 Mar 2024 • Minsuk Chang, SeokHyeon Park, Hyeon Jeon, Aeri Cho, Soohyun Lee, Jinwook Seo
We demonstrated the effectiveness of our method in mitigating bias through improved classification accuracy and the refined focus of the model.
no code implementations • 16 Feb 2024 • Minsuk Kahng, Ian Tenney, Mahima Pushkarna, Michael Xieyang Liu, James Wexler, Emily Reif, Krystal Kallarackal, Minsuk Chang, Michael Terry, Lucas Dixon
Automatic side-by-side evaluation has emerged as a promising approach to evaluating the quality of responses from large language models (LLMs).
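Side-by-side evaluation reduces to tallying pairwise verdicts between two models' responses. The sketch below is a generic win-rate computation for illustration only, not the tooling described in the paper; the verdict labels and tie-splitting convention are assumptions.

```python
# Illustrative sketch of side-by-side (pairwise) evaluation: given judgments
# comparing responses from model A and model B on a set of prompts, compute
# each model's win rate. Generic tally, not the paper's implementation.
from collections import Counter

def win_rates(judgments):
    """judgments: list of 'A', 'B', or 'tie' verdicts, one per prompt."""
    counts = Counter(judgments)
    total = len(judgments)
    # A common convention: split ties evenly between the two models.
    a = (counts["A"] + counts["tie"] / 2) / total
    b = (counts["B"] + counts["tie"] / 2) / total
    return {"A": a, "B": b}

verdicts = ["A", "A", "B", "tie", "A", "B", "tie", "A"]
print(win_rates(verdicts))  # {'A': 0.625, 'B': 0.375}
```

In practice the verdicts would come from human raters or an LLM judge; the aggregation step stays the same.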
no code implementations • 16 Oct 2023 • Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim
Instead, our approach BOSS (BOotStrapping your own Skills) learns to accomplish new tasks by performing "skill bootstrapping," where an agent with a set of primitive skills interacts with the environment to practice new skills without receiving reward feedback for tasks outside of the initial skill set.
1 code implementation • 17 Jun 2023 • Jeongeun Park, Seungwon Lim, Joonhyung Lee, Sangbeom Park, Minsuk Chang, Youngjae Yu, Sungjoon Choi
In this paper, we focus on inferring whether a given user command is clear, ambiguous, or infeasible in the context of interactive robotic agents utilizing large language models (LLMs).
3 code implementations • 30 Mar 2023 • Dongyoon Han, Junsuk Choe, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh
We refer to the new paradigm of training models with annotation byproducts as learning using annotation byproducts (LUAB).
no code implementations • 27 Mar 2023 • Tae Soo Kim, Arghya Sarkar, Yoonjoo Lee, Minsuk Chang, Juho Kim
However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer's needs, requiring them to continuously switch between interfaces while writing.
1 code implementation • ICCV 2023 • Dongyoon Han, Junsuk Choe, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh
We refer to the new paradigm of training models with annotation byproducts as learning using annotation byproducts (LUAB).
no code implementations • 31 May 2022 • Young-Ho Kim, Sungdong Kim, Minsuk Chang, Sang-Woo Lee
Current natural language interaction for self-tracking tools largely depends on bespoke implementations optimized for a specific tracking theme and data format, an approach that neither generalizes nor scales to the vast design space of self-tracking.
1 code implementation • 24 May 2022 • Miyoung Ko, Ingyu Seong, Hwaran Lee, Joonsuk Park, Minsuk Chang, Minjoon Seo
With the growing importance of detecting misinformation, many studies have focused on verifying factual claims by retrieving evidence.
2 code implementations • 7 Apr 2022 • Sanghyuk Chun, Wonjae Kim, Song Park, Minsuk Chang, Seong Joon Oh
Image-Text matching (ITM) is a common task for evaluating the quality of Vision and Language (VL) models.
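ITM is commonly scored by embedding the image and each candidate caption and ranking candidates by similarity. The following is a minimal, self-contained illustration with toy embedding vectors; a real VL model would produce the embeddings, and none of this reflects the specific method of the paper above.

```python
# Toy illustration of image-text matching (ITM) scoring: rank candidate
# captions for an image by cosine similarity between embedding vectors.
# The vectors below are made up; a real VL model would compute them.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

image_emb = [0.9, 0.1, 0.2]
captions = {
    "a dog on the beach": [0.8, 0.2, 0.1],
    "a city skyline at night": [0.1, 0.9, 0.3],
}
# The best-matching caption is the one with the highest cosine similarity.
best = max(captions, key=lambda c: cosine(image_emb, captions[c]))
print(best)  # a dog on the beach
```

Evaluating a VL model this way assumes a single correct caption per image; the paper's premise is that this assumption can be questioned, since several captions may plausibly match one image.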
2 code implementations • EMNLP 2021 • Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, WooMyoung Park, Nako Sung
GPT-3 demonstrates the remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds of billions of tokens.
1 code implementation • ACL 2021 • Sungdong Kim, Minsuk Chang, Sang-Woo Lee
We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation.