Search Results for author: SeungHyun Lee

Found 18 papers, 12 papers with code

The Power of Sound (TPoS): Audio Reactive Video Generation with Stable Diffusion

no code implementations • ICCV 2023 • Yujin Jeong, Wonjeong Ryoo, SeungHyun Lee, Dabin Seo, Wonmin Byeon, Sangpil Kim, Jinkyu Kim

Hence, we propose The Power of Sound (TPoS) model to incorporate audio input that includes both changeable temporal semantics and magnitude.

Video Generation

Addressing Selection Bias in Computerized Adaptive Testing: A User-Wise Aggregate Influence Function Approach

1 code implementation • 23 Aug 2023 • Soonwoo Kwon, Sojung Kim, SeungHyun Lee, Jin-Young Kim, Suyeong An, Kyuseok Kim

Indeed, when naively training the diagnostic model using CAT response data, we observe that item profiles deviate significantly from the ground-truth.

Selection bias

Addressing Negative Transfer in Diffusion Models

1 code implementation • NeurIPS 2023 • Hyojun Go, Jinyoung Kim, Yunsung Lee, SeungHyun Lee, Shinhyeok Oh, Hyeongdon Moon, Seungtaek Choi

Through this, our approach addresses the issue of negative transfer in diffusion models by allowing for efficient computation of MTL methods.

Clustering • Denoising +1

Task-Adaptive Pseudo Labeling for Transductive Meta-Learning

no code implementations • 21 Apr 2023 • Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song

As a result, the proposed method is able to deal with more examples in the adaptation process than inductive methods can, which can lead to better classification performance.

Meta-Learning
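The transductive idea above, adapting on the unlabeled query examples as well as the labeled support set, can be illustrated with a minimal pseudo-labeling sketch. The prototype classifier, the two-step refit, and all names here are illustrative assumptions, not the paper's actual task-adaptive pseudo-labeling method:

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x, n_classes):
    # Class prototypes: mean support embedding per class.
    protos = np.stack([support_x[support_y == c].mean(axis=0)
                       for c in range(n_classes)])
    # Nearest-prototype assignment for each query embedding.
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def transductive_adapt(support_x, support_y, query_x, n_classes):
    # Step 1: pseudo-label the unlabeled query examples.
    pseudo = prototype_classify(support_x, support_y, query_x, n_classes)
    # Step 2: refit on support + pseudo-labeled queries, so adaptation
    # uses more examples than the support set alone (the transductive gain).
    aug_x = np.concatenate([support_x, query_x])
    aug_y = np.concatenate([support_y, pseudo])
    return prototype_classify(aug_x, aug_y, query_x, n_classes)
```

An inductive method would stop after fitting prototypes on the support set; the second pass is what lets the query distribution inform the classifier.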

DisCoHead: Audio-and-Video-Driven Talking Head Generation by Disentangled Control of Head Pose and Facial Expressions

1 code implementation • 14 Mar 2023 • Geumbyeol Hwang, Sunwon Hong, SeungHyun Lee, Sungwoo Park, Gyeongsu Chae

We enhance the efficiency of DisCoHead by integrating a dense motion estimator and the encoder of a generator, which are originally separate modules.

Talking Head Generation

Clinical Decision Transformer: Intended Treatment Recommendation through Goal Prompting

no code implementations • 1 Feb 2023 • SeungHyun Lee, Da Young Lee, Sujeong Im, Nan Hee Kim, Sung-Min Park

For this, we conducted goal-conditioned sequencing, which generated subsequences of the treatment history with a prepended future goal state, and trained the CDT to model the sequential medications required to reach that goal state.

Recommendation Systems
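Goal-conditioned sequencing of the kind described, prepending a future goal state to each subsequence of the treatment history, can be sketched as below. The data layout and the `goal_conditioned_sequences` helper are hypothetical illustrations, not the CDT's actual preprocessing:

```python
def goal_conditioned_sequences(visits, goals, horizon):
    """Build training subsequences with a prepended future goal state.

    visits:  list of (state, treatment) pairs in time order.
    goals:   goals[t] is a goal state summarizing the outcome reached
             after step t (a hypothetical representation).
    horizon: maximum subsequence length in visits.
    Returns sequences of the form [goal, s_t, a_t, ..., s_k, a_k],
    so a sequence model learns treatments conditioned on the goal.
    """
    sequences = []
    for t in range(len(visits)):
        end = min(t + horizon, len(visits))
        goal = goals[end - 1]          # the future goal state to reach
        seq = [goal]
        for s, a in visits[t:end]:     # interleave states and treatments
            seq += [s, a]
        sequences.append(seq)
    return sequences
```

At inference time the same format supports goal prompting: prepend the desired goal state and let the model generate the treatment tokens.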

Evaluating the Knowledge Dependency of Questions

1 code implementation • 21 Nov 2022 • Hyeongdon Moon, Yoonseok Yang, Jamin Shin, Hangyeol Yu, SeungHyun Lee, Myeongho Jeong, Juneyoung Park, Minsam Kim, Seungtaek Choi

They fail to evaluate the MCQ's ability to assess the student's knowledge of the corresponding target fact.

Multiple-choice

CFA: Coupled-hypersphere-based Feature Adaptation for Target-Oriented Anomaly Localization

2 code implementations • 9 Jun 2022 • Sungwook Lee, SeungHyun Lee, Byung Cheol Song

In addition, this paper points out the negative effects of biased features of pre-trained CNNs and emphasizes the importance of the adaptation to the target dataset.

Transfer Learning • Unsupervised Anomaly Detection

FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support

no code implementations • 13 Mar 2022 • Seock-Hwan Noh, Jahyun Koo, SeungHyun Lee, Jongse Park, Jaeha Kung

While several prior works have proposed such multi-precision support for DNN accelerators, they focus only on inference, and their core utilization is suboptimal at a fixed precision and for specific layer types when training is considered.

Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning

1 code implementation • 5 Mar 2022 • SeungHyun Lee, Byung Cheol Song

The EKG utilized for the following search iteration is composed of the ensemble knowledge of interim sub-networks, i.e., the by-products of the sub-network evaluation.

Knowledge Distillation

Vision Transformer for Small-Size Datasets

5 code implementations • 27 Dec 2021 • Seung Hoon Lee, SeungHyun Lee, Byung Cheol Song

However, the high performance of the ViT results from pre-training on a large-size dataset such as JFT-300M, and its dependence on a large dataset is attributed to its low locality inductive bias.

Image Classification • Inductive Bias

Contextual Gradient Scaling for Few-Shot Learning

1 code implementation • 20 Oct 2021 • Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song

Experimental results show that CxGrad effectively encourages the backbone to learn task-specific knowledge in the inner loop and improves the performance of MAML by a significant margin in both same- and cross-domain few-shot classification.

Cross-Domain Few-Shot
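A context-dependent inner-loop update of the kind CxGrad describes can be sketched as follows. The sigmoid-of-context scaling here is a stand-in assumption for the learned scaling network in the actual method; only the overall shape (MAML inner step with a task-conditioned gradient scale) is what the sketch shows:

```python
import numpy as np

def inner_loop_step(params, grads, task_context, lr=0.01):
    """One MAML-style inner-loop update with contextual gradient scaling.

    params / grads:  dicts mapping parameter names to values/gradients.
    task_context:    an array summarizing the task (hypothetical features);
                     the real method would learn the mapping to a scale.
    """
    # Task-conditioned scalar in (0, 1); a fixed sigmoid stands in for
    # the learned scaling network.
    scale = 1.0 / (1.0 + np.exp(-task_context.mean()))
    # Scaled gradient descent step toward task-specific parameters.
    return {k: p - lr * scale * grads[k] for k, p in params.items()}
```

The outer MAML loop would then evaluate the adapted parameters on the query set and backpropagate through this step as usual.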

Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble

1 code implementation • 1 Jul 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets.

Offline RL • reinforcement-learning +1

Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets

no code implementations • 1 Jan 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

As it turns out, fine-tuning offline RL agents is a non-trivial challenge due to distribution shift: the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.

D4RL • Offline RL +3
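The bootstrapping error described above is what a pessimistic Q-ensemble targets: if the ensemble disagrees on an action, it is likely out-of-distribution, and taking the minimum estimate avoids bootstrapping from an overestimated value. A minimal sketch of such a target, assuming a plain min-over-ensemble rule (the general idea, not the paper's exact estimator):

```python
import numpy as np

def pessimistic_target(reward, next_q_values, gamma=0.99, done=False):
    """Bootstrapped Q-learning target using the minimum over an ensemble.

    next_q_values: array of shape (n_ensemble,), each member's estimate
    of Q(s', a') for the next state-action pair. The min penalizes
    actions the ensemble disagrees on, which are typically
    out-of-distribution, stabilizing offline-to-online fine-tuning.
    """
    pessimistic_q = float(np.min(next_q_values))
    return reward + (0.0 if done else gamma * pessimistic_q)
```

Each ensemble member would then regress toward this shared pessimistic target on minibatches drawn from the replay buffer.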

Conditions for bubbles to arise under heterogeneous beliefs

no code implementations • 26 Dec 2020 • SeungHyun Lee, Hyungbin Park

This paper studies the equilibrium price of a continuous time asset traded in a market with heterogeneous investors.

Graph-based Knowledge Distillation by Multi-head Attention Network

2 code implementations • 4 Jul 2019 • Seunghyun Lee, Byung Cheol Song

Knowledge distillation (KD) is a technique to derive optimal performance from a small student network (SN) by distilling knowledge of a large teacher network (TN) and transferring the distilled knowledge to the small SN.

Inductive Bias • Knowledge Distillation +1
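The teacher-to-student transfer described in the snippet is most commonly implemented with Hinton-style soft targets; the sketch below shows that classic temperature-scaled KD loss as a baseline, not the paper's graph-based multi-head-attention variant:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened, numerically stable softmax.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions, the standard soft-target KD objective. The T**2
    factor keeps gradient magnitudes comparable across temperatures.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -(T ** 2) * float(np.sum(p_t * np.log(p_s + 1e-12)))
```

In training, this term is typically mixed with the ordinary cross-entropy on ground-truth labels; graph-based variants like the paper's instead transfer relational structure between layer embeddings rather than output distributions.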
