Search Results for author: Minsuk Chang

Found 13 papers, 7 papers with code

Interactive Children’s Story Rewriting Through Parent-Children Interaction

no code implementations In2Writing (ACL) 2022 Yoonjoo Lee, Tae Soo Kim, Minsuk Chang, Juho Kim

Storytelling in early childhood provides significant benefits in language and literacy development, relationship building, and entertainment.

Extracting Human Attention through Crowdsourced Patch Labeling

no code implementations 22 Mar 2024 Minsuk Chang, SeokHyeon Park, Hyeon Jeon, Aeri Cho, Soohyun Lee, Jinwook Seo

We demonstrated the effectiveness of our method in mitigating bias through improved classification accuracy and a more refined model focus.

Image Classification, Saliency Detection

LLM Comparator: Visual Analytics for Side-by-Side Evaluation of Large Language Models

no code implementations 16 Feb 2024 Minsuk Kahng, Ian Tenney, Mahima Pushkarna, Michael Xieyang Liu, James Wexler, Emily Reif, Krystal Kallarackal, Minsuk Chang, Michael Terry, Lucas Dixon

Automatic side-by-side evaluation has emerged as a promising approach to evaluating the quality of responses from large language models (LLMs).

Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance

no code implementations 16 Oct 2023 Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim

Instead, our approach BOSS (BOotStrapping your own Skills) learns to accomplish new tasks by performing "skill bootstrapping," where an agent with a set of primitive skills interacts with the environment to practice new skills without receiving reward feedback for tasks outside of the initial skill set.

Language Modelling, Large Language Model

CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents

1 code implementation 17 Jun 2023 Jeongeun Park, Seungwon Lim, Joonhyung Lee, Sangbeom Park, Minsuk Chang, Youngjae Yu, Sungjoon Choi

In this paper, we focus on inferring whether the given user command is clear, ambiguous, or infeasible in the context of interactive robotic agents utilizing large language models (LLMs).

Question Generation, Uncertainty Quantification

Neglected Free Lunch -- Learning Image Classifiers Using Annotation Byproducts

3 code implementations 30 Mar 2023 Dongyoon Han, Junsuk Choe, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh

We refer to the new paradigm of training models with annotation byproducts as learning using annotation byproducts (LUAB).

Time Series

LMCanvas: Object-Oriented Interaction to Personalize Large Language Model-Powered Writing Environments

no code implementations 27 Mar 2023 Tae Soo Kim, Arghya Sarkar, Yoonjoo Lee, Minsuk Chang, Juho Kim

However, these interfaces provide limited support for writers to create personal tools for their own unique tasks, and may not comprehensively fulfill a writer's needs -- requiring them to continuously switch between interfaces during writing.

Language Modelling, Large Language Model

Leveraging Pre-Trained Language Models to Streamline Natural Language Interaction for Self-Tracking

no code implementations 31 May 2022 Young-Ho Kim, Sungdong Kim, Minsuk Chang, Sang-Woo Lee

Current natural language interaction for self-tracking tools largely depends on bespoke implementation optimized for a specific tracking theme and data format, which is neither generalizable nor scalable to a tremendous design space of self-tracking.

ClaimDiff: Comparing and Contrasting Claims on Contentious Issues

1 code implementation 24 May 2022 Miyoung Ko, Ingyu Seong, Hwaran Lee, Joonsuk Park, Minsuk Chang, Minjoon Seo

With the growing importance of detecting misinformation, many studies have focused on verifying factual claims by retrieving evidence.

Fact Verification, Misinformation
