no code implementations • 22 Dec 2023 • Chanho Lee, Jinsu Son, Hyounguk Shon, Yunho Jeon, Junmo Kim
Compared to state-of-the-art methods, our proposed method delivers comparable performance on DOTA-v1.0 and outperforms them by 1.5 mAP on DOTA-v1.5, all while significantly reducing the model parameters to 16%.
no code implementations • 1 Jan 2021 • Duhyeon Bang, Yunho Jeon, Jin-Hwa Kim, Jiwon Kim, Hyunjung Shim
When a person identifies objects, they can reason by associating the objects with many classes and draw conclusions by taking inter-class relations into account.
no code implementations • 7 Aug 2020 • Youngeun Kim, Sungeun Hong, Seunghan Yang, Sungil Kang, Yunho Jeon, Jiwon Kim
Our Associative Partial Domain Adaptation (APDA) uses intra-domain association to actively filter out the non-trivial anomaly samples in each source-private class that sample-level weighting cannot handle.
no code implementations • 10 Jul 2020 • Yunho Jeon, Yongseok Choi, Jaesun Park, Subin Yi, Dong-Yeon Cho, Jiwon Kim
However, this is likely to restrict the potential of the target model, and some of the knowledge transferred from the source can interfere with the training procedure.
no code implementations • 11 Nov 2018 • Yunho Jeon, Junmo Kim
Furthermore, we extend an ACU to a grouped ACU, which can observe multiple receptive fields in one layer.
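The grouped-ACU idea above can be pictured as giving each channel group its own learned offsets on top of a standard 3x3 sampling grid, so different groups cover differently sized receptive fields in the same layer. A minimal NumPy sketch, with purely illustrative offsets (the actual paper learns them during training):

```python
import numpy as np

# Base sampling grid of a standard 3x3 convolution (relative positions).
BASE = np.array([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)],
                dtype=float)

def acu_positions(offsets):
    """Effective sampling positions of an ACU: the 3x3 base grid plus a
    learnable (dy, dx) offset for each of the nine kernel points."""
    return BASE + np.asarray(offsets, dtype=float)

# A grouped ACU gives each channel group its own offset set, so a single
# layer observes several receptive fields at once (offsets are hypothetical):
group1 = acu_positions(np.zeros((9, 2)))  # plain 3x3 receptive field
group2 = acu_positions(BASE)              # spacing doubled: acts like dilation 2

print(group1.max(), group2.max())  # → 1.0 2.0: a 3x3 and a 5x5 field in one layer
```

Here `group2` simply stretches the grid, but in general each point can move independently, which is what lets one layer mix several receptive-field shapes.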
2 code implementations • NeurIPS 2018 • Yunho Jeon, Junmo Kim
To cope with various convolutions, we propose a new shift operation called active shift layer (ASL) that formulates the amount of shift as a learnable function with shift parameters.
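The shift amounts in ASL are real-valued and learned, which requires the shifted feature to be differentiable with respect to the shift parameters. A minimal NumPy sketch of the forward pass, using bilinear interpolation to realize fractional per-channel shifts (the function names and zero-padding choice here are assumptions, not the paper's reference implementation):

```python
import numpy as np

def shift_int(img, oy, ox):
    """Integer shift with zero padding: out[i, j] = img[i + oy, j + ox]."""
    H, W = img.shape
    out = np.zeros_like(img)
    out[max(0, -oy):H - max(0, oy), max(0, -ox):W - max(0, ox)] = \
        img[max(0, oy):H - max(0, -oy), max(0, ox):W - max(0, -ox)]
    return out

def active_shift(x, shifts):
    """Active shift layer (sketch): each channel of x (C, H, W) is shifted
    by its own possibly fractional (dy, dx) from shifts (C, 2).  Bilinear
    interpolation between the four neighboring integer shifts is what makes
    the shift parameters learnable by backpropagation."""
    C, H, W = x.shape
    out = np.empty_like(x, dtype=float)
    for c in range(C):
        dy, dx = shifts[c]
        y0, x0 = int(np.floor(dy)), int(np.floor(dx))
        fy, fx = dy - y0, dx - x0
        out[c] = ((1 - fy) * (1 - fx) * shift_int(x[c], y0, x0)
                  + (1 - fy) * fx * shift_int(x[c], y0, x0 + 1)
                  + fy * (1 - fx) * shift_int(x[c], y0 + 1, x0)
                  + fy * fx * shift_int(x[c], y0 + 1, x0 + 1))
    return out

x = np.arange(9, dtype=float).reshape(1, 3, 3)
print(active_shift(x, [(0.5, 0.0)])[0, 0, 0])  # → 1.5 (halfway between rows 0 and 1)
```

Combined with 1x1 convolutions for channel mixing, such a shift layer replaces spatial convolution at a fraction of the parameter and FLOP cost.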
1 code implementation • CVPR 2017 • Yunho Jeon, Junmo Kim
The convolution layer is the core of the CNN, but few studies have addressed the convolution unit itself.