no code implementations • 6 Apr 2025 • Ximing Lu, Seungju Han, David Acuna, Hyunwoo Kim, JaeHun Jung, Shrimai Prabhumoye, Niklas Muennighoff, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Yejin Choi
For weak-to-strong improvement, we retrospectively revise R1-671B's traces from the OpenThoughts dataset using R1-distill-32B, a model 20x smaller, as the Retro-Search searcher.
no code implementations • 29 Mar 2025 • Nam Anh Dinh, Itai Lang, Hyunwoo Kim, Oded Stein, Rana Hanocka
We present Geometry in Style, a new method for identity-preserving mesh stylization.
no code implementations • 17 Feb 2025 • Hyunwoo Kim, Melanie Sclar, Tan Zhi-Xuan, Lance Ying, Sydney Levine, Yang Liu, Joshua B. Tenenbaum, Yejin Choi
Existing LLM reasoning methods have shown impressive capabilities across various tasks, such as solving math and coding problems.
no code implementations • 17 Oct 2024 • Yuling Gu, Oyvind Tafjord, Hyunwoo Kim, Jared Moore, Ronan Le Bras, Peter Clark, Yejin Choi
"), and (c) judgment ("Mary paid for the chips.
no code implementations • 24 Sep 2024 • Xuhui Zhou, Hyunwoo Kim, Faeze Brahman, Liwei Jiang, Hao Zhu, Ximing Lu, Frank Xu, Bill Yuchen Lin, Yejin Choi, Niloofar Mireshghallah, Ronan Le Bras, Maarten Sap
AI agents are increasingly autonomous in their interactions with human users and tools, leading to increased interactional safety risks.
no code implementations • 27 Aug 2024 • Hyunwoo Kim, Itai Lang, Noam Aigerman, Thibault Groueix, Vladimir G. Kim, Rana Hanocka
We propose MeshUp, a technique that deforms a 3D mesh towards multiple target concepts, and intuitively controls the region where each concept is expressed.
1 code implementation • 12 Aug 2024 • Geuntaek Lim, Hyunwoo Kim, Joonsoo Kim, Yukyung Choi
To address these problems, we propose a novel framework that aligns human action knowledge and VLP knowledge in a probabilistic embedding space.
1 code implementation • 18 Jul 2024 • Hyunwoo Kim, Yoonseo Choi, Taehyun Yang, Honggu Lee, Chaneon Park, YongJu Lee, Jin Young Kim, Juho Kim
From a qualitative analysis of 250 conversational turns from an in-lab user evaluation of Naver Cue:, a commercial conversational search engine, we propose a taxonomy of 18 follow-up query patterns in conversational search, comprising two major axes: (1) users' motivations for continuing the conversation (N = 7) and (2) actions of follow-up queries (N = 11).
1 code implementation • 8 Jul 2024 • Chani Jung, Dongkwan Kim, Jiho Jin, Jiseon Kim, Yeon Seonwoo, Yejin Choi, Alice Oh, Hyunwoo Kim
Our evaluation of eight state-of-the-art LLMs reveals that the models generally perform well in perception inference while exhibiting limited capability in perception-to-belief inference (e.g., lack of inhibitory control).
1 code implementation • 16 Apr 2024 • Huihan Li, Liwei Jiang, Jena D. Hwang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi
As the utilization of large language models (LLMs) has proliferated worldwide, it is crucial for them to have adequate knowledge and fair representation of diverse global cultures.
1 code implementation • CVPR 2024 • Minhyuk Seo, Hyunseo Koh, Wonje Jeung, Minjae Lee, San Kim, Hankook Lee, Sungjun Cho, Sungik Choi, Hyunwoo Kim, Jonghyun Choi
Online continual learning suffers from an underfitted solution due to insufficient training for prompt model update (e.g., single-epoch training).
no code implementations • 8 Mar 2024 • Xuhui Zhou, Zhe Su, Tiwalayo Eisape, Hyunwoo Kim, Maarten Sap
Recent advances in large language models (LLMs) have enabled richer social simulations, allowing for the study of various social phenomena.
1 code implementation • 5 Mar 2024 • Aly M. Kassem, Omar Mahmoud, Niloofar Mireshghallah, Hyunwoo Kim, Yulia Tsvetkov, Yejin Choi, Sherif Saad, Santu Rana
In this paper, we introduce a black-box prompt optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent than is revealed by prompting the target model directly with the training data, which is the dominant approach for quantifying memorization in LLMs.
no code implementations • 5 Feb 2024 • Anthony Sicilia, Hyunwoo Kim, Khyathi Raghavi Chandu, Malihe Alikhani, Jack Hessel
Effective interlocutors account for the uncertain goals, beliefs, and emotions of others.
1 code implementation • 27 Oct 2023 • Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi
The interactive use of large language models (LLMs) in AI assistants (at work, home, etc.) introduces a new set of inference-time privacy risks.
no code implementations • 24 Oct 2023 • Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap
Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity.
1 code implementation • 15 Mar 2023 • Won Jo, Geuntaek Lim, Gwangjin Lee, Hyunwoo Kim, Byungsoo Ko, Yukyung Choi
In content-based video retrieval (CBVR) over large-scale collections, efficiency is as important as accuracy; thus, several video-level feature-based studies have been actively conducted.
Ranked #12 on Video Retrieval on FIVR-200K
1 code implementation • 20 Dec 2022 • Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin Choi
Data scarcity has been a long-standing issue in the field of open-domain social dialogue.
1 code implementation • 4 Nov 2022 • Dong Hoon Lee, Sungik Choi, Hyunwoo Kim, Sae-Young Chung
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization.
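The information-maximization idea behind such pseudo-labeling can be illustrated with a minimal sketch (not the authors' implementation; the estimator and names below are illustrative): mutual information between inputs and cluster assignments decomposes into a marginal-entropy term, which rewards balanced clusters, minus a conditional-entropy term, which rewards confident per-sample assignments.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a (batch of) probability distribution(s)."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def mutual_information(probs):
    """I(X; Y) ~= H(mean_x p(y|x)) - mean_x H(p(y|x)).

    High MI means confident per-sample assignments (low conditional
    entropy) that are balanced across clusters (high marginal entropy).
    """
    marginal = probs.mean(axis=0)                      # estimate of p(y)
    return entropy(marginal) - entropy(probs, axis=1).mean()

# Confident, balanced assignments score higher than uniform ones.
confident = np.eye(4)[np.array([0, 1, 2, 3] * 8)]     # one-hot, balanced
uniform = np.full((32, 4), 0.25)                       # maximally uncertain
assert mutual_information(confident) > mutual_information(uniform)
```

Maximizing this quantity over soft assignments discourages the degenerate solution where every sample collapses into a single cluster, since that solution has zero marginal entropy.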
1 code implementation • 7 Oct 2022 • Jihwan Jeong, Xiaoyu Wang, Michael Gimelfarb, Hyunwoo Kim, Baher Abdulhai, Scott Sanner
Offline reinforcement learning (RL) addresses the problem of learning a performant policy from a fixed batch of data collected by following some behavior policy.
no code implementations • 16 Jun 2022 • Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data while not forgetting past learned classes.
1 code implementation • 25 May 2022 • Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, Maarten Sap
With this dataset, we introduce a dialogue safety detection module, Canary, capable of generating RoTs given conversational context, and a socially-informed dialogue agent, Prost.
Ranked #1 on Dialogue Safety Prediction on ProsocialDialog
no code implementations • CVPR 2022 • Eunji Kim, Siwon Kim, Jungbeom Lee, Hyunwoo Kim, Sungroh Yoon
Weakly supervised object localization aims to find a target object region in a given image with only weak supervision, such as image-level labels.
5 code implementations • CVPR 2022 • Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, Sungroh Yoon
Diffusion models learn to restore noisy data, which is corrupted with different levels of noise, by optimizing the weighted sum of the corresponding loss terms, i.e., denoising score matching loss.
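The loss described above can be sketched as follows. This is a minimal illustration of a per-noise-level weighted denoising objective, not the paper's implementation; the noise levels, weights, and the dummy model are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_dsm_loss(model, x0, sigmas, weights):
    """Weighted sum of denoising losses across noise levels.

    `weights[i]` scales the loss at noise level `sigmas[i]`; the choice
    of weighting scheme is a design decision (illustrative here).
    """
    total = 0.0
    for sigma, w in zip(sigmas, weights):
        noise = rng.normal(size=x0.shape)
        x_noisy = x0 + sigma * noise        # corrupt data at this noise level
        pred = model(x_noisy, sigma)        # model tries to predict the noise
        total += w * np.mean((pred - noise) ** 2)
    return total / len(sigmas)

# Dummy model just to exercise the API: always predicts zero noise.
dummy = lambda x, sigma: np.zeros_like(x)
x0 = rng.normal(size=(16, 8))
loss = weighted_dsm_loss(dummy, x0, sigmas=[0.1, 1.0], weights=[1.0, 0.5])
```

Re-weighting the per-level terms changes which noise scales dominate training, which is exactly the knob such weighting schemes tune.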
1 code implementation • CVPR 2022 • Su Ho Han, Sukjun Hwang, Seoung Wug Oh, Yeonchool Park, Hyunwoo Kim, Min-Jung Kim, Seon Joo Kim
We also introduce cooperatively operating modules that aggregate information from available frames, in order to enrich the features for all subtasks in VIS.
1 code implementation • EMNLP 2021 • Hyunwoo Kim, Byeongchang Kim, Gunhee Kim
Empathy is a complex cognitive ability based on the reasoning of others' affective states.
no code implementations • NAACL 2021 • Byeongchang Kim, Hyunwoo Kim, Seokhee Hong, Gunhee Kim
In this work, we ask: How robust are fact checking systems on claims in colloquial style?
4 code implementations • 20 May 2021 • Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, JunSeong Kim, Yongsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Lyu, Younghoon Jeong, InKwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jung-Woo Ha, Kyunghyun Cho
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark.
3 code implementations • 22 Mar 2021 • Zheda Mai, Ruiwen Li, Hyunwoo Kim, Scott Sanner
Online class-incremental continual learning (CL) studies the problem of learning new classes continually from an online non-stationary data stream, intending to adapt to new data while mitigating catastrophic forgetting.
1 code implementation • 15 Feb 2021 • Sam Sattarzadeh, Mahesh Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
However, the average gradient-based terms deployed in this method underestimate the contribution of the representations discovered by the model to its predictions.
no code implementations • 15 Feb 2021 • Mahesh Sudhakar, Sam Sattarzadeh, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
Explainable AI (XAI) is an active research area to interpret a neural network's decision by ensuring transparency and trust in the task-specified learned models.
1 code implementation • 25 Jan 2021 • Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, Scott Sanner
To better understand the relative advantages of various approaches and the settings where they work best, this survey aims to (1) compare state-of-the-art methods such as MIR, iCARL, and GDumb and determine which works best in different experimental settings; (2) determine whether the best class-incremental methods are also competitive in the domain-incremental setting; (3) evaluate the performance of 7 simple but effective tricks, such as the "review" trick and the nearest class mean (NCM) classifier, to assess their relative impact.
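The NCM classifier trick mentioned above is simple enough to sketch in a few lines (a minimal illustration; the class name and data here are hypothetical): each class is summarized by the mean of its feature vectors, and a query is assigned to the nearest mean.

```python
import numpy as np

class NearestClassMean:
    """Nearest class mean (NCM) classifier: represent each class by the
    mean of its feature vectors; assign queries to the closest mean."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from every sample to every class mean
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Two well-separated clusters of 2-D features.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.8]])
y = np.array([0, 0, 1, 1])
clf = NearestClassMean().fit(X, y)
preds = clf.predict(np.array([[0.1, 0.0], [4.9, 5.1]]))  # -> [0, 1]
```

Because the class means can be updated incrementally as new samples arrive, NCM pairs naturally with continual-learning setups where retraining a full classifier head is impractical.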
no code implementations • 24 Oct 2020 • Bryce Chudomelka, Youngjoon Hong, Hyunwoo Kim, Jinyoung Park
Nonlinear differential equations are challenging to solve numerically and are important to understanding the dynamics of many physical systems.
no code implementations • 1 Oct 2020 • Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyunghoon Bae
In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique and aggregate them to reach a fine-grained and complete explanation.
3 code implementations • 31 Aug 2020 • Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang
As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption.
1 code implementation • 11 Jul 2020 • Zheda Mai, Hyunwoo Kim, Jihwan Jeong, Scott Sanner
Continual learning is a branch of deep learning that seeks to strike a balance between learning stability and plasticity.
1 code implementation • EMNLP 2020 • Hyunwoo Kim, Byeongchang Kim, Gunhee Kim
Results on the Dialogue NLI (Welleck et al., 2019) and PersonaChat (Zhang et al., 2018) datasets show that our approach reduces contradictions and improves the consistency of existing dialogue models.
4 code implementations • CVPR 2019 • Byungju Kim, Hyunwoo Kim, Kyung-Su Kim, Sungjin Kim, Junmo Kim
We propose a novel regularization algorithm for training deep neural networks in settings where the training data are severely biased.
1 code implementation • NAACL 2019 • Byeongchang Kim, Hyunwoo Kim, Gunhee Kim
We address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model.