Search Results for author: Beomsu Kim

Found 24 papers, 14 papers with code

Generalized Consistency Trajectory Models for Image Manipulation

1 code implementation · 19 Mar 2024 · Beomsu Kim, JaeMin Kim, Jeongsol Kim, Jong Chul Ye

Diffusion-based generative models excel in unconditional generation, as well as on applied tasks such as image editing and restoration.

Denoising · Image Manipulation +2

Task-Oriented Diffusion Model Compression

no code implementations · 31 Jan 2024 · Geonung Kim, Beomsu Kim, Eunhyeok Park, Sunghyun Cho

As recent advancements in large-scale Text-to-Image (T2I) diffusion models have yielded remarkably high-quality image generation, diverse downstream Image-to-Image (I2I) applications have emerged.

Denoising · Image Generation +2

Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models

1 code implementation · NeurIPS 2023 · Geon Yeong Park, Jeongsol Kim, Beomsu Kim, Sang Wan Lee, Jong Chul Ye

Despite the remarkable performance of text-to-image diffusion models in image generation tasks, recent studies have raised the issue that generated images sometimes fail to capture the intended semantic content of the text prompts, a phenomenon often called semantic misalignment.

Denoising · Image Inpainting

Unpaired Image-to-Image Translation via Neural Schrödinger Bridge

1 code implementation · 24 May 2023 · Beomsu Kim, Gihyun Kwon, Kwanyoung Kim, Jong Chul Ye

Diffusion models are a powerful class of generative models which simulate stochastic differential equations (SDEs) to generate data from noise.

Image-to-Image Translation · Translation
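The snippet above summarizes how diffusion models generate data by simulating an SDE from noise. As a generic illustration only (not the paper's Schrödinger bridge method), the sketch below runs an Euler–Maruyama simulation of a reverse-time variance-exploding SDE with a toy analytic score for a standard-normal data distribution; the noise schedule, step count, and score are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): Euler-Maruyama simulation of a
# reverse-time variance-exploding (VE) diffusion SDE with a toy analytic score.
import numpy as np

SIGMA_MAX = 10.0  # assumed noise scale at t = 1 (illustrative choice)

def sigma(t):
    return SIGMA_MAX * t  # simple VE noise schedule

def g(t):
    # Diffusion coefficient: g(t) = sqrt(d sigma^2(t) / dt).
    return SIGMA_MAX * np.sqrt(2.0 * t)

def score(x, t):
    # Toy data distribution N(0, I): the VE-perturbed marginal is
    # N(0, (1 + sigma(t)^2) I), whose score is -x / (1 + sigma(t)^2).
    return -x / (1.0 + sigma(t) ** 2)

def sample(n=2000, dim=2, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps
    # Start from the prior at t = 1 and integrate the reverse SDE down to t ~ 0.
    x = rng.normal(scale=np.sqrt(1.0 + SIGMA_MAX ** 2), size=(n, dim))
    for i in range(steps, 0, -1):
        t = i * dt
        noise = rng.standard_normal(x.shape)
        x = x + (g(t) ** 2) * score(x, t) * dt + g(t) * np.sqrt(dt) * noise
    return x

if __name__ == "__main__":
    samples = sample()
    print("sample std (target ~1.0):", samples.std())
```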

Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics

no code implementations · 20 May 2023 · Taekyung Kim, Jungwi Mun, Junwon Seo, Beomsu Kim, Seongil Hong

Active exploration, in which a robot directs itself to states that yield the highest information gain, is essential for efficient data collection and minimizing human supervision.

Autonomous Vehicles · Model-based Reinforcement Learning
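As a rough illustration of the exploration idea described above (not the paper's algorithm), the sketch below scores candidate actions by how much a probabilistic ensemble of dynamics models disagrees about the predicted next state, a common proxy for information gain; the toy ensemble, state, and action set are all assumptions.

```python
# Hedged sketch: choose the action the dynamics-model ensemble disagrees about
# most, using prediction variance as an information-gain proxy.
import numpy as np

rng = np.random.default_rng(0)

# Toy "probabilistic ensemble": each member is a random linear dynamics model
# mapping (state, action) -> next-state mean, standing in for trained networks.
STATE_DIM, ACTION_DIM, ENSEMBLE_SIZE = 4, 2, 5
ensemble = [
    (rng.normal(size=(STATE_DIM, STATE_DIM)) * 0.1,
     rng.normal(size=(STATE_DIM, ACTION_DIM)) * 0.1)
    for _ in range(ENSEMBLE_SIZE)
]

def predict(member, state, action):
    A, B = member
    return state + A @ state + B @ action  # simple residual dynamics

def disagreement(state, action):
    # Variance of next-state predictions across members, summed over dimensions.
    preds = np.stack([predict(m, state, action) for m in ensemble])
    return preds.var(axis=0).sum()

def choose_exploratory_action(state, candidate_actions):
    scores = [disagreement(state, a) for a in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]

if __name__ == "__main__":
    state = rng.normal(size=STATE_DIM)
    candidates = [rng.normal(size=ACTION_DIM) for _ in range(16)]
    print("most informative candidate action:", choose_exploratory_action(state, candidates))
```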

Minimizing Trajectory Curvature of ODE-based Generative Models

1 code implementation · 27 Jan 2023 · Sangyun Lee, Beomsu Kim, Jong Chul Ye

Based on the relationship between the forward process and the curvature, here we present an efficient method of training the forward process to minimize the curvature of generative trajectories without any ODE/SDE simulation.

Attribute

Measuring and Improving Semantic Diversity of Dialogue Generation

1 code implementation · 11 Oct 2022 · Seungju Han, Beomsu Kim, Buru Chang

In this paper, we introduce a new automatic evaluation metric to measure the semantic diversity of generated responses.

Dialogue Generation

Denoising MCMC for Accelerating Diffusion-Based Generative Models

1 code implementation · 29 Sep 2022 · Beomsu Kim, Jong Chul Ye

Diffusion models are powerful generative models that simulate the reverse of diffusion processes using score functions to synthesize data from noise.

Denoising · Image Generation

Mitigating Out-of-Distribution Data Density Overestimation in Energy-Based Models

no code implementations · 30 May 2022 · Beomsu Kim, Jong Chul Ye

Deep energy-based models (EBMs), which use deep neural networks (DNNs) as energy functions, are receiving increasing attention due to their ability to learn complex distributions.
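The snippet above notes that deep EBMs use a DNN as the energy function. Purely as a generic illustration (not this paper's density-correction method), the sketch below defines a small MLP energy network whose unnormalized log-density is -E(x); the architecture and sizes are arbitrary assumptions.

```python
# Generic deep energy-based model sketch: a DNN assigns a scalar energy E(x),
# defining an unnormalized density p(x) proportional to exp(-E(x)).
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # scalar energy per sample

def unnormalized_log_density(energy_net, x):
    return -energy_net(x)

if __name__ == "__main__":
    ebm = EnergyNet()
    x = torch.randn(8, 2)
    print("energies:", ebm(x).detach())
    print("unnormalized log p(x):", unnormalized_log_density(ebm, x).detach())
```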

Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances

1 code implementation · NAACL 2022 · Seungju Han, Beomsu Kim, Jin Yong Yoo, Seokjun Seo, SangBum Kim, Enkhbayar Erdenee, Buru Chang

To better reflect the style of the character, PDP builds the prompts in the form of dialog that includes the character's utterances as dialog history.

Chatbot · Retrieval
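As a loose illustration of the prompt format described above, the sketch below assembles a dialog-style prompt from a few character utterances; the function name, formatting, and example lines are hypothetical and are not the paper's exact PDP implementation.

```python
# Hypothetical sketch of a dialog-style prompt built from a character's
# utterances (illustrative format only, not the paper's exact PDP prompts).
def build_dialog_prompt(character_utterances, user_message, character_name="Character"):
    # Present stored utterances as dialog history in the character's voice.
    lines = [f"{character_name}: {u}" for u in character_utterances]
    lines.append(f"User: {user_message}")
    lines.append(f"{character_name}:")  # cue the model to continue as the character
    return "\n".join(lines)

if __name__ == "__main__":
    history = ["To infinity and beyond!", "I am a space ranger, not a toy."]
    print(build_dialog_prompt(history, "Who are you?", character_name="Buzz"))
```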

Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness

no code implementations · 21 Feb 2022 · Beomsu Kim, Junghoon Seo

Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs).

Adversarial Robustness
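The snippet above describes adversarial examples as imperceptible perturbations of natural inputs. For context only, the sketch below crafts such a perturbation with the classic FGSM attack (a standard technique, not the paper's semi-implicit hybrid gradient method); the toy model is an assumption.

```python
# Classic FGSM sketch (illustrative; not the paper's method): perturb an input
# in the direction of the sign of the loss gradient, bounded by epsilon.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small signed-gradient step, then clamp to a valid image range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```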

Energy-Based Contrastive Learning of Visual Representations

1 code implementation · 10 Feb 2022 · Beomsu Kim, Jong Chul Ye

Contrastive learning is a method of learning visual representations by training Deep Neural Networks (DNNs) to increase the similarity between representations of positive pairs (transformations of the same image) and reduce the similarity between representations of negative pairs (transformations of different images).

Contrastive Learning · Self-Supervised Learning
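The description above spells out the contrastive objective: pull positive pairs together and push negative pairs apart. Below is a minimal NT-Xent (InfoNCE) style sketch of that objective, the standard formulation rather than the paper's energy-based variant; batch size and temperature are arbitrary.

```python
# Minimal NT-Xent / InfoNCE sketch: for 2N views (two augmentations of N images),
# each view's positive is its counterpart; all other views are negatives.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # View i's positive is view i + N (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)    # stand-in embeddings
    print("NT-Xent loss:", nt_xent_loss(z1, z2).item())
```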

Understanding and Improving the Exemplar-based Generation for Open-domain Conversation

1 code implementation · NLP4ConvAI (ACL) 2022 · Seungju Han, Beomsu Kim, Seokjun Seo, Enkhbayar Erdenee, Buru Chang

Extensive experiments demonstrate that our proposed training method alleviates the drawbacks of the existing exemplar-based generative models and significantly improves the performance in terms of appropriateness and informativeness.

Informativeness · Retrieval

Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation

1 code implementation · Findings (EMNLP) 2021 · Beomsu Kim, Seokjun Seo, Seungju Han, Enkhbayar Erdenee, Buru Chang

G2R consists of two distinct distillation techniques: data-level G2R augments the dialogue dataset with additional responses generated by the large-scale generative model, and model-level G2R transfers the response quality score assessed by the generative model to the retrieval model's score via a knowledge distillation loss.

Knowledge Distillation · Retrieval
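As a rough sketch of the model-level idea described above, the example below regresses a retrieval model's response score toward a quality score produced by a generative "teacher". This is illustrative only; the paper's actual G2R loss and score definitions follow the paper, not this code, and the tensors and MSE choice are assumptions.

```python
# Hedged sketch of model-level score distillation: push a retrieval model's
# relevance score toward a quality score from a generative teacher.
import torch
import torch.nn as nn

retrieval_score = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))

def distillation_loss(context_response_emb, teacher_quality_score):
    student_score = retrieval_score(context_response_emb).squeeze(-1)
    return nn.functional.mse_loss(student_score, teacher_quality_score)

if __name__ == "__main__":
    emb = torch.randn(16, 768)     # assumed context + response embeddings
    teacher = torch.rand(16)       # assumed quality scores from a generative model
    print("KD loss:", distillation_loss(emb, teacher).item())
```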

Disentangling Label Distribution for Long-tailed Visual Recognition

2 code implementations · CVPR 2021 · Youngkyu Hong, Seungju Han, Kwanghee Choi, Seokjun Seo, Beomsu Kim, Buru Chang

Although this method surpasses state-of-the-art methods on benchmark datasets, it can be further improved by directly disentangling the source label distribution from the model prediction in the training phase.

Image Classification · Long-tail Learning
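Related to the disentangling idea above, one common way to remove the source (training) label distribution from a classifier's prediction is logit adjustment with the class prior. The sketch below shows that generic adjustment, not the paper's estimator, with a made-up long-tailed prior.

```python
# Generic logit adjustment sketch (not the paper's method): subtract the log of
# the long-tailed source label prior and optionally add a target prior.
import torch

def adjust_logits(logits, source_prior, target_prior=None):
    adjusted = logits - torch.log(source_prior)
    if target_prior is not None:
        adjusted = adjusted + torch.log(target_prior)
    return adjusted

if __name__ == "__main__":
    counts = torch.tensor([1000., 300., 100., 30., 10.])  # made-up class counts
    source_prior = counts / counts.sum()
    logits = torch.randn(4, 5)
    print(adjust_logits(logits, source_prior).argmax(dim=1))
```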

Filter Style Transfer between Photos

no code implementations · ECCV 2020 · Jonghwa Yim, Jisung Yoo, Won-joon Do, Beomsu Kim, Jihwan Choe

Unlike conventional style transfer, the new Filter Style Transfer (FST) technique can extract a custom filter style from a filtered style image and transfer it to a content image.

Image-to-Image Translation · Style Transfer +1

MarioNETte: Few-shot Face Reenactment Preserving Identity of Unseen Targets

no code implementations · 19 Nov 2019 · Sungjoo Ha, Martin Kersner, Beomsu Kim, Seokjun Seo, Dongyoung Kim

When there is a mismatch between the target identity and the driver identity, face reenactment suffers severe degradation in the quality of the result, especially in a few-shot setting.

Disentanglement · Face Reenactment

Revisiting Classical Bagging with Modern Transfer Learning for On-the-fly Disaster Damage Detector

no code implementations · 4 Oct 2019 · Junghoon Seo, Seungwon Lee, Beomsu Kim, Taegyun Jeon

In this paper, we revisit the classical bootstrap aggregating approach in the context of modern transfer learning for data-efficient disaster damage detection.

Change Detection · Disentanglement +3
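As a plain illustration of bootstrap aggregating mentioned above, independent of the paper's transfer-learning setup, the sketch below trains several copies of a toy nearest-centroid model on bootstrap resamples and aggregates their predictions by voting; the model and data are placeholders.

```python
# Bootstrap aggregating (bagging) sketch with a toy nearest-centroid classifier.
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y, num_classes):
    # "Train" one weak model: per-class mean feature vector.
    return np.stack([X[y == c].mean(axis=0) for c in range(num_classes)])

def predict(centroids, X):
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def bagging_predict(X_train, y_train, X_test, num_classes=2, n_models=10):
    votes = np.zeros((X_test.shape[0], num_classes))
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), size=len(X_train))   # bootstrap resample
        centroids = fit_centroids(X_train[idx], y_train[idx], num_classes)
        preds = predict(centroids, X_test)
        votes[np.arange(len(X_test)), preds] += 1                # aggregate by voting
    return votes.argmax(axis=1)

if __name__ == "__main__":
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    print(bagging_predict(X, y, X[:5]))
```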

Temporal Convolution for Real-time Keyword Spotting on Mobile Devices

3 code implementations · 8 Apr 2019 · Seungwoo Choi, Seokjun Seo, Beomjun Shin, Hyeongmin Byun, Martin Kersner, Beomsu Kim, Dongyoung Kim, Sungjoo Ha

In addition, we release the implementation of the proposed and the baseline models including an end-to-end pipeline for training models and evaluating them on mobile devices.

Ranked #14 on Keyword Spotting on Google Speech Commands (Google Speech Commands V2 12 metric)

Keyword Spotting

Bridging Adversarial Robustness and Gradient Interpretability

1 code implementation · 27 Mar 2019 · Beomsu Kim, Junghoon Seo, Taegyun Jeon

Adversarial training is a training scheme designed to counter adversarial attacks by augmenting the training dataset with adversarial examples.

Adversarial Robustness
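The snippet above defines adversarial training as augmenting the training dataset with adversarial examples. Reusing the FGSM-style attack sketched earlier, below is a hedged single training step that mixes clean and adversarial losses; the toy model, optimizer settings, and 50/50 mixing are assumptions, not the paper's setup.

```python
# Hedged adversarial-training step: craft FGSM perturbations on the fly and
# train on both clean and adversarial inputs (illustrative setup only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def adversarial_training_step(x, y, epsilon=0.03):
    # Craft adversarial examples with a single signed-gradient (FGSM) step.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on an even mix of clean and adversarial batches (assumed ratio).
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    print("mixed loss:", adversarial_training_step(x, y))
```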

Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps

2 code implementations · 13 Feb 2019 · Beomsu Kim, Junghoon Seo, SeungHyun Jeon, Jamyoung Koo, Jeongyeol Choe, Taegyun Jeon

Saliency Map, the gradient of the score function with respect to the input, is the most basic technique for interpreting deep neural network decisions.
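The snippet above defines a saliency map as the gradient of the class score with respect to the input. A minimal sketch of computing one with autograd is below (standard vanilla-gradient saliency, with a toy model standing in for a trained network).

```python
# Vanilla gradient saliency sketch: gradient of the top predicted class score
# with respect to the input pixels (toy model stands in for a trained one).
import torch
import torch.nn as nn

def saliency_map(model, x):
    x = x.clone().detach().requires_grad_(True)
    scores = model(x)
    top_class_score = scores.gather(1, scores.argmax(dim=1, keepdim=True)).sum()
    top_class_score.backward()
    return x.grad.abs()  # per-pixel sensitivity of the top-class score

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    print("saliency shape:", saliency_map(model, x).shape)
```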

Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative

no code implementations · 8 Jun 2018 · Junghoon Seo, Jeongyeol Choe, Jamyoung Koo, Seunghyeon Jeon, Beomsu Kim, Taegyun Jeon

SmoothGrad and VarGrad are techniques that enhance the empirical quality of standard saliency maps by adding noise to the input.
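The snippet above describes SmoothGrad and VarGrad as aggregating saliency maps over noisy copies of the input. A minimal SmoothGrad-style sketch follows; the noise level, sample count, and toy model are assumptions.

```python
# SmoothGrad-style sketch: average input gradients over several noisy copies
# of the input; VarGrad would take the variance instead of the mean.
import torch
import torch.nn as nn

def smoothgrad(model, x, n_samples=25, noise_std=0.1):
    grads = []
    for _ in range(n_samples):
        noisy = (x + noise_std * torch.randn_like(x)).requires_grad_(True)
        scores = model(noisy)
        scores.gather(1, scores.argmax(dim=1, keepdim=True)).sum().backward()
        grads.append(noisy.grad)
    return torch.stack(grads).mean(dim=0)   # use .var(dim=0) for VarGrad

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    print("smoothed saliency shape:", smoothgrad(model, x).shape)
```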

Improving Occlusion and Hard Negative Handling for Single-Stage Pedestrian Detectors

no code implementations · CVPR 2018 · Junhyug Noh, Soochan Lee, Beomsu Kim, Gunhee Kim

We propose methods of addressing two critical issues of pedestrian detection: (i) occlusion of target objects as false negative failure, and (ii) confusion with hard negative examples like vertical structures as false positive failure.

Occlusion Handling · Pedestrian Detection
