Search Results for author: Hyunwoo Kim

Found 31 papers, 20 papers with code

Perceptions to Beliefs: Exploring Precursory Inferences for Theory of Mind in Large Language Models

no code implementations · 8 Jul 2024 · Chani Jung, Dongkwan Kim, Jiho Jin, Jiseon Kim, Yeon Seonwoo, Yejin Choi, Alice Oh, Hyunwoo Kim

Our evaluation of eight state-of-the-art LLMs reveals that the models generally perform well in perception inference while exhibiting limited capability in perception-to-belief inference (e.g., lack of inhibitory control).

CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting

1 code implementation · 16 Apr 2024 · Huihan Li, Liwei Jiang, Jena D. Hwang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi

As the utilization of large language models (LLMs) has proliferated worldwide, it is crucial for them to have adequate knowledge and fair representation for diverse global cultures.

Diversity · Fairness

Learning Equi-angular Representations for Online Continual Learning

1 code implementation · CVPR 2024 · Minhyuk Seo, Hyunseo Koh, Wonje Jeung, Minjae Lee, San Kim, Hankook Lee, Sungjun Cho, Sungik Choi, Hyunwoo Kim, Jonghyun Choi

Online continual learning suffers from an underfitted solution due to insufficient training for prompt model update (e.g., single-epoch training).

Continual Learning

Is this the real life? Is this just fantasy? The Misleading Success of Simulating Social Interactions With LLMs

no code implementations · 8 Mar 2024 · Xuhui Zhou, Zhe Su, Tiwalayo Eisape, Hyunwoo Kim, Maarten Sap

Recent advances in large language models (LLMs) have enabled richer social simulations, allowing for the study of various social phenomena.

Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs

1 code implementation · 5 Mar 2024 · Aly M. Kassem, Omar Mahmoud, Niloofar Mireshghallah, Hyunwoo Kim, Yulia Tsvetkov, Yejin Choi, Sherif Saad, Santu Rana

In this paper, we introduce a black-box prompt-optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent than is revealed by prompting the target model directly with its training data, which is the dominant approach to quantifying memorization in LLMs.

Memorization

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

no code implementations · 24 Oct 2023 · Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap

Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity.

Question Answering

VVS: Video-to-Video Retrieval with Irrelevant Frame Suppression

1 code implementation · 15 Mar 2023 · Won Jo, Geuntaek Lim, Gwangjin Lee, Hyunwoo Kim, Byungsoo Ko, Yukyung Choi

In content-based video retrieval (CBVR) over large-scale collections, efficiency is as important as accuracy; thus, several video-level feature-based studies have been actively conducted.

Retrieval · Video Retrieval

Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment

1 code implementation · 4 Nov 2022 · Dong Hoon Lee, Sungik Choi, Hyunwoo Kim, Sae-Young Chung

This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization.

Linear evaluation · Pseudo Label +2

Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization

1 code implementation · 7 Oct 2022 · Jihwan Jeong, Xiaoyu Wang, Michael Gimelfarb, Hyunwoo Kim, Baher Abdulhai, Scott Sanner

Offline reinforcement learning (RL) addresses the problem of learning a performant policy from a fixed batch of data collected by following some behavior policy.

Continuous Control · D4RL +1

Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective

no code implementations · 16 Jun 2022 · Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data while not forgetting past learned classes.

Class Incremental Learning · Incremental Learning +2

ProsocialDialog: A Prosocial Backbone for Conversational Agents

1 code implementation · 25 May 2022 · Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, Maarten Sap

With this dataset, we introduce a dialogue safety detection module, Canary, capable of generating RoTs given conversational context, and a socially-informed dialogue agent, Prost.

Dialogue Generation · Dialogue Safety Prediction +2

Perception Prioritized Training of Diffusion Models

5 code implementations · CVPR 2022 · Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, Sungroh Yoon

Diffusion models learn to restore noisy data, which is corrupted with different levels of noise, by optimizing the weighted sum of the corresponding loss terms, i.e., the denoising score matching loss.

Denoising
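The objective summarized above, a weighted sum of per-noise-level denoising losses, can be sketched in a few lines. This is a minimal illustrative NumPy sketch, not the paper's implementation; the `denoiser`, `noise_levels`, and `weights` names are assumptions introduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_score_matching_loss(denoiser, x0, noise_levels, weights):
    """Weighted sum of per-noise-level denoising losses.

    At each noise level t, clean data x0 is corrupted with Gaussian noise
    scaled by sigma; the model predicts that noise, and weights[t] scales
    the corresponding loss term.
    """
    total = 0.0
    for t, (sigma, w) in enumerate(zip(noise_levels, weights)):
        eps = rng.standard_normal(x0.shape)
        x_noisy = x0 + sigma * eps          # corrupt data at level t
        eps_hat = denoiser(x_noisy, t)      # model's noise estimate
        total += w * np.mean((eps_hat - eps) ** 2)
    return total

# Toy usage: an untrained "denoiser" that always predicts zero noise.
x0 = rng.standard_normal((8, 4))
loss = denoising_score_matching_loss(
    denoiser=lambda x, t: np.zeros_like(x),
    x0=x0,
    noise_levels=[0.1, 0.5, 1.0],
    weights=[1.0, 1.0, 1.0],
)
print(float(loss))
```

Reweighting the `weights` entries is the knob a perception-prioritized scheme would turn; with all weights equal the sketch reduces to the plain multi-level denoising objective.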

Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning

3 code implementations · 22 Mar 2021 · Zheda Mai, Ruiwen Li, Hyunwoo Kim, Scott Sanner

Online class-incremental continual learning (CL) studies the problem of learning new classes continually from an online non-stationary data stream, intending to adapt to new data while mitigating catastrophic forgetting.

Class Incremental Learning
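The nearest class mean classifier revisited above fits an online stream because it only needs running per-class feature means. Below is a minimal illustrative sketch under that assumption; the class and method names are made up here and are not the paper's code:

```python
import numpy as np

class NearestClassMean:
    """Nearest class mean (NCM) classifier: each class is summarized by the
    running mean of its feature vectors, and prediction picks the class
    whose mean is closest to the input feature."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, feats, labels):
        # Running per-class sums allow single-pass updates over a stream.
        for f, y in zip(feats, labels):
            self.sums[y] = self.sums.get(y, np.zeros_like(f)) + f
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, feats):
        classes = sorted(self.sums)
        means = np.stack([self.sums[c] / self.counts[c] for c in classes])
        dists = np.linalg.norm(feats[:, None, :] - means[None, :, :], axis=-1)
        return [classes[i] for i in dists.argmin(axis=1)]

# Toy stream: two classes arriving incrementally.
ncm = NearestClassMean()
ncm.update(np.array([[0.0, 0.0], [0.2, 0.0]]), [0, 0])
ncm.update(np.array([[5.0, 5.0]]), [1])
print(ncm.predict(np.array([[0.1, 0.1], [4.0, 4.5]])))  # -> [0, 1]
```

Because new classes just add a new mean, this classifier avoids the biased output layer that plagues softmax heads in class-incremental settings.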

Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring

1 code implementation · 15 Feb 2021 · Sam Sattarzadeh, Mahesh Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim

However, the average gradient-based terms deployed in this method underestimate the contribution of the representations discovered by the model to its predictions.

Object Localization

Online Continual Learning in Image Classification: An Empirical Survey

1 code implementation · 25 Jan 2021 · Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, Scott Sanner

To better understand the relative advantages of various approaches and the settings where they work best, this survey aims to (1) compare state-of-the-art methods such as MIR, iCARL, and GDumb and determine which works best in different experimental settings; (2) determine whether the best class-incremental methods are also competitive in the domain-incremental setting; and (3) evaluate the performance of 7 simple but effective tricks, such as the "review" trick and the nearest class mean (NCM) classifier, to assess their relative impact.

Classification · Continual Learning +2

Deep neural network for solving differential equations motivated by Legendre-Galerkin approximation

no code implementations · 24 Oct 2020 · Bryce Chudomelka, Youngjoon Hong, Hyunwoo Kim, Jinyoung Park

Nonlinear differential equations are challenging to solve numerically and are important to understanding the dynamics of many physical systems.

Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation

no code implementations · 1 Oct 2020 · Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyunghoon Bae

In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique and aggregate them to reach a fine-grained and complete explanation.

Explainable Artificial Intelligence (XAI)

Online Class-Incremental Continual Learning with Adversarial Shapley Value

3 code implementations · 31 Aug 2020 · Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang

As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption.

Continual Learning · Open-Ended Question Answering

Batch-level Experience Replay with Review for Continual Learning

1 code implementation · 11 Jul 2020 · Zheda Mai, Hyunwoo Kim, Jihwan Jeong, Scott Sanner

Continual learning is a branch of deep learning that seeks to strike a balance between learning stability and plasticity.

Continual Learning
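Experience replay methods like the one above revolve around a small fixed-size memory drawn from the stream and replayed alongside new batches. A common choice for filling that memory is reservoir sampling; the sketch below is an illustrative assumption, not the paper's implementation, and the `ReservoirBuffer` name is invented here:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory filled by reservoir sampling, so every
    example seen so far has an equal probability of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a stored example with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a replay mini-batch to mix with the incoming batch."""
        return self.rng.sample(self.data, min(k, len(self.data)))

# Toy stream of 100 examples kept in a memory of 5.
buf = ReservoirBuffer(capacity=5)
for i in range(100):
    buf.add(i)
print(len(buf.data), buf.sample(3))
```

A "review" pass in this framing would simply be extra training steps drawn from `buf` at task boundaries, before evaluation.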

Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness

1 code implementation · EMNLP 2020 · Hyunwoo Kim, Byeongchang Kim, Gunhee Kim

Results on the Dialogue NLI (Welleck et al., 2019) and PersonaChat (Zhang et al., 2018) datasets show that our approach reduces contradiction and improves the consistency of existing dialogue models.

Dialogue Generation · Natural Language Inference

Learning Not to Learn: Training Deep Neural Networks with Biased Data

4 code implementations · CVPR 2019 · Byungju Kim, Hyunwoo Kim, Kyung-Su Kim, Sungjin Kim, Junmo Kim

We propose a novel regularization algorithm to train deep neural networks, in which data at training time is severely biased.

Abstractive Summarization of Reddit Posts with Multi-level Memory Networks

1 code implementation · NAACL 2019 · Byeongchang Kim, Hyunwoo Kim, Gunhee Kim

We address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model.

Abstractive Text Summarization
