no code implementations • ICCV 2023 • Yujin Jeong, Wonjeong Ryoo, SeungHyun Lee, Dabin Seo, Wonmin Byeon, Sangpil Kim, Jinkyu Kim
Hence, we propose The Power of Sound (TPoS) model, which incorporates audio input that carries both changing temporal semantics and magnitude.
1 code implementation • 23 Aug 2023 • Soonwoo Kwon, Sojung Kim, SeungHyun Lee, Jin-Young Kim, Suyeong An, Kyuseok Kim
Indeed, when naively training the diagnostic model using CAT response data, we observe that item profiles deviate significantly from the ground-truth.
1 code implementation • NeurIPS 2023 • Hyojun Go, Jinyoung Kim, Yunsung Lee, SeungHyun Lee, Shinhyeok Oh, Hyeongdon Moon, Seungtaek Choi
Through this, our approach addresses negative transfer in diffusion models by enabling efficient computation of multi-task learning (MTL) methods.
no code implementations • 21 Apr 2023 • Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song
As a result, the proposed method can handle more examples during adaptation than inductive methods, which can lead to better classification performance.
1 code implementation • 14 Mar 2023 • Geumbyeol Hwang, Sunwon Hong, SeungHyun Lee, Sungwoo Park, Gyeongsu Chae
We enhance the efficiency of DisCoHead by integrating the dense motion estimator and the generator's encoder, which are originally separate modules.
no code implementations • 1 Feb 2023 • SeungHyun Lee, Da Young Lee, Sujeong Im, Nan Hee Kim, Sung-Min Park
For this, we conducted goal-conditioned sequencing, which generated a subsequence of the treatment history with the future goal state prepended, and trained the CDT to model the sequential medications required to reach that goal state.
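As a rough illustration of how such a goal-conditioned training example might be assembled (the helper name, the field names `state` and `medication`, and the window length are hypothetical, not taken from the paper):

```python
from typing import Dict, List

def build_goal_conditioned_sequence(history: List[Dict], goal_index: int,
                                    window: int = 10) -> Dict:
    """Hypothetical sketch: take the window of (state, medication) steps that
    ends at `goal_index` and prepend the future goal state, so a sequence
    model can learn which medications lead from that history to the goal."""
    segment = history[max(0, goal_index - window):goal_index]
    return {
        "goal": history[goal_index]["state"],                 # prepended future goal state
        "states": [step["state"] for step in segment],        # past patient states
        "actions": [step["medication"] for step in segment],  # past medications
    }
```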
1 code implementation • CVPR 2023 • Hyojun Go, Yunsung Lee, Jin-Young Kim, SeungHyun Lee, Myeongho Jeong, Hyun Seung Lee, Seungtaek Choi
For that, the existing practice is to fine-tune the guidance models with labeled data corrupted by noise.
1 code implementation • 21 Nov 2022 • Hyeongdon Moon, Yoonseok Yang, Jamin Shin, Hangyeol Yu, SeungHyun Lee, Myeongho Jeong, Juneyoung Park, Minsam Kim, Seungtaek Choi
They fail to evaluate the MCQ's ability to assess the student's knowledge of the corresponding target fact.
2 code implementations • 9 Jun 2022 • Sungwook Lee, SeungHyun Lee, Byung Cheol Song
In addition, this paper points out the negative effects of biased features of pre-trained CNNs and emphasizes the importance of the adaptation to the target dataset.
Ranked #24 on Anomaly Detection on MVTec AD
no code implementations • 13 Mar 2022 • Seock-Hwan Noh, Jahyun Koo, SeungHyun Lee, Jongse Park, Jaeha Kung
While several prior works proposed such multi-precision support for DNN accelerators, they not only focus solely on inference, but also achieve suboptimal core utilization at a fixed precision and for specific layer types when training is considered.
1 code implementation • 5 Mar 2022 • SeungHyun Lee, Byung Cheol Song
The EKG utilized for the following search iteration is composed of the ensemble knowledge of interim sub-networks, i.e., the by-products of the sub-network evaluation.
5 code implementations • 27 Dec 2021 • Seung Hoon Lee, SeungHyun Lee, Byung Cheol Song
However, the high performance of the ViT results from pre-training on a large-scale dataset such as JFT-300M, and its dependence on large datasets is attributed to its low locality inductive bias.
1 code implementation • 20 Oct 2021 • Sanghyuk Lee, SeungHyun Lee, Byung Cheol Song
Experimental results show that CxGrad effectively encourages the backbone to learn task-specific knowledge in the inner-loop and improves the performance of MAML by a significant margin in both same-domain and cross-domain few-shot classification.
1 code implementation • 1 Jul 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin
Recent advance in deep offline reinforcement learning (RL) has made it possible to train strong robotic agents from offline datasets.
1 code implementation • 28 Apr 2021 • SeungHyun Lee, Byung Cheol Song
Knowledge distillation (KD) is one of the most useful techniques for light-weight neural networks.
no code implementations • 1 Jan 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin
As it turns out, fine-tuning offline RL agents is a non-trivial challenge, due to distribution shift – the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.
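For intuition, the sketch below marks where out-of-distribution actions enter a Q-learning bootstrap target during online fine-tuning; `q_network` and `policy` are placeholder callables for illustration, not the paper's implementation.

```python
import torch

def td_target(reward, next_state, q_network, policy, gamma=0.99):
    """Illustrative TD target for Q-learning. Once online interaction begins,
    the policy can propose actions unlike anything in the offline dataset, so
    the Q-network is queried out-of-distribution and any overestimation it
    makes there is bootstrapped straight into the regression target."""
    with torch.no_grad():
        next_action = policy(next_state)  # may be out-of-distribution
        return reward + gamma * q_network(next_state, next_action)
```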
no code implementations • 26 Dec 2020 • SeungHyun Lee, Hyungbin Park
This paper studies the equilibrium price of a continuous time asset traded in a market with heterogeneous investors.
2 code implementations • 4 Jul 2019 • Seunghyun Lee, Byung Cheol Song
Knowledge distillation (KD) is a technique for deriving optimal performance from a small student network (SN) by distilling the knowledge of a large teacher network (TN) and transferring it to the SN.
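For reference, a minimal sketch of the generic logit-distillation objective (temperature-softened KL plus hard-label cross-entropy, in the style of Hinton et al.); this is only the standard formulation, not the specific transfer scheme proposed in the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic KD objective: blend hard-label cross-entropy with a KL term
    between temperature-softened teacher and student output distributions."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients match the hard-label term
    return alpha * hard + (1.0 - alpha) * soft
```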