no code implementations • CVPR 2023 • Dongwan Kim, Bohyung Han
A primary goal of class-incremental learning is to strike a balance between stability and plasticity, where models should be both stable enough to retain knowledge learned from previously seen classes, and plastic enough to learn concepts from new classes.
no code implementations • 28 Feb 2022 • Dongwan Kim, Yi-Hsuan Tsai, Yumin Suh, Masoud Faraki, Sparsh Garg, Manmohan Chandraker, Bohyung Han
First, a gradient conflict during training, caused by mismatched label spaces, is identified, and a class-independent binary cross-entropy loss is proposed to alleviate such label conflicts.
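A minimal sketch of the idea of a class-independent binary cross-entropy loss: each class is scored with its own sigmoid, and classes outside a given dataset's label space are masked out so they contribute no gradient. The function name and masking convention here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def class_independent_bce(logits, targets, valid_mask):
    """Per-class sigmoid BCE. Classes absent from a dataset's label
    space are masked out (valid_mask == 0), so they produce no loss
    and no conflicting gradient. Hypothetical sketch of the idea."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-7
    loss = -(targets * np.log(probs + eps)
             + (1.0 - targets) * np.log(1.0 - probs + eps))
    loss = loss * valid_mask  # drop classes outside this label space
    return loss.sum() / max(valid_mask.sum(), 1.0)
```

Because each class is treated as an independent binary problem, an unlabeled class in one dataset never competes (via a softmax) with labeled classes from another.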
no code implementations • 21 Feb 2022 • Huy Truong Dinh, Dongwan Kim, Daehee Kim
Moreover, by using a linear transformation, the computation time of the optimization problem in the proposed system-centric CEMS is only 118.2 s for a 500-home community, which is short enough for day-ahead scheduling of a community.
no code implementations • NeurIPS 2021 • Sanghyeok Chu, Dongwan Kim, Bohyung Han
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes, and the model is able to learn more debiased and disentangled feature representations.
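One hedged reading of "randomly eliminating certain class information in each training iteration" is to mask out a random subset of class logits per iteration, so gradients for the dropped classes are suppressed and features cannot co-adapt across all classes at once. The function below is an illustrative sketch under that assumption, not the paper's exact mechanism.

```python
import numpy as np

def drop_class_logits(logits, drop_prob, rng):
    """Randomly zero a subset of class columns for one iteration
    (hypothetical sketch): dropped classes receive no gradient,
    which weakens feature dependencies among classes."""
    keep = rng.random(logits.shape[1]) >= drop_prob  # per-class keep mask
    return logits * keep, keep
```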
1 code implementation • CVPR 2021 • Seungmin Lee, Dongwan Kim, Bohyung Han
We focus on designing an image-text compositor, i.e., integrating multi-modal inputs to produce a representation similar to that of the target image.
Ranked #15 on Image Retrieval on Fashion IQ
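A toy sketch of what an image-text compositor can look like: a text-conditioned gate modulates the image feature and a residual term injects the text-described modification. The gated-residual form and the weight names here are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def compose(image_feat, text_feat, w_gate, w_res):
    """Hypothetical compositor sketch: gate the image feature by the
    text, then add a residual computed from both modalities. The goal
    is an output embedding close to the target image's embedding."""
    gate = 1.0 / (1.0 + np.exp(-(w_gate @ text_feat)))      # text-driven gate
    residual = w_res @ np.concatenate([image_feat, text_feat])
    return gate * image_feat + residual
```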
1 code implementation • ICCV 2019 • Seungmin Lee, Dongwan Kim, Namil Kim, Seong-Gyun Jeong
Recent works on domain adaptation exploit adversarial training to obtain domain-invariant feature representations from the joint learning of feature extractor and domain discriminator networks.
Ranked #16 on Domain Adaptation on VisDA2017
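The adversarial setup described above can be sketched as two opposing objectives: the discriminator minimizes a BCE that separates source from target features, while the feature extractor minimizes the reversed objective so that features become domain-invariant. The linear discriminator and function names below are simplifying assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_losses(src_feats, tgt_feats, disc_w):
    """Minimal sketch of adversarial domain adaptation with a linear
    discriminator: disc_loss trains the discriminator to tell the
    domains apart; feat_loss trains the extractor to fool it."""
    p_src = sigmoid(src_feats @ disc_w)  # discriminator's "source" score
    p_tgt = sigmoid(tgt_feats @ disc_w)
    eps = 1e-7
    disc_loss = (-np.mean(np.log(p_src + eps))
                 - np.mean(np.log(1.0 - p_tgt + eps)))
    feat_loss = (-np.mean(np.log(1.0 - p_src + eps))
                 - np.mean(np.log(p_tgt + eps)))
    return disc_loss, feat_loss
```

In practice these two terms are usually implemented jointly with a gradient-reversal layer rather than as two separate losses.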
no code implementations • ECCV 2020 • Seonguk Seo, Yumin Suh, Dongwan Kim, Geeho Kim, Jongwoo Han, Bohyung Han
We propose a simple but effective multi-source domain generalization technique based on deep neural networks by incorporating optimized normalization layers that are specific to individual domains.
Ranked #3 on Unsupervised Domain Adaptation on PACS
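A minimal sketch of normalization layers specific to individual domains: each source domain keeps its own normalization statistics, so batches are normalized with the statistics of their own domain. The class below is an illustrative simplification, not the paper's exact optimized-normalization scheme.

```python
import numpy as np

class DomainSpecificNorm:
    """Per-domain feature normalization (hypothetical sketch): each
    domain index selects its own slot of running statistics, so one
    domain's distribution does not contaminate another's."""

    def __init__(self, num_domains, num_features, eps=1e-5):
        self.mean = np.zeros((num_domains, num_features))
        self.var = np.ones((num_domains, num_features))
        self.eps = eps

    def __call__(self, x, domain):
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        self.mean[domain], self.var[domain] = mu, var  # track per-domain stats
        return (x - mu) / np.sqrt(var + self.eps)
```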