1 code implementation • ECCV 2020 • Wonho Bae, Junhyug Noh, Gunhee Kim
Weakly supervised object localization (WSOL) is the task of localizing an object in an image using only image-level labels.
no code implementations • 30 Dec 2024 • Wonho Bae, Gabriel L. Oliveira, Danica J. Sutherland
Most active learning research has focused on methods that perform well when many labels are available, but which can be dramatically worse than random selection when label budgets are small.
1 code implementation • 18 Dec 2024 • Jing Wang, Wonho Bae, Jiahong Chen, Kuangen Zhang, Leonid Sigal, Clarence W. de Silva
The absence of access to source data during adaptation makes it challenging to analytically estimate the domain gap.
no code implementations • 16 Jul 2024 • Wonho Bae, Junhyug Noh, Danica J. Sutherland
The ProbCover method of Yehuda et al. is a well-motivated algorithm for active learning in low-budget regimes; it attempts to "cover" the data distribution with balls of a given radius centered at the selected data points.
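To make the covering idea concrete, here is a minimal NumPy sketch of greedy ball-cover selection in the spirit of the description above; it is an illustration of the general idea rather than the authors' implementation, and the function name, the dense pairwise-distance matrix, and the radius argument are simplifying assumptions.

```python
import numpy as np

def probcover_select(features, budget, radius):
    """Greedy ball-cover selection: repeatedly pick the point whose radius-ball
    covers the most still-uncovered points, then mark those points as covered."""
    n = len(features)
    # cover[i, j] is True if point j lies inside the ball of the given radius around point i
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    cover = dists <= radius
    uncovered = np.ones(n, dtype=bool)
    selected = []
    for _ in range(budget):
        # number of currently uncovered points each candidate's ball would cover
        gains = (cover & uncovered[None, :]).sum(axis=1)
        best = int(np.argmax(gains))
        selected.append(best)
        uncovered &= ~cover[best]
    return selected
```

The radius is the key hyperparameter here: too small and the balls cover almost nothing, too large and a single ball covers everything, which is part of what makes the low-budget setting delicate.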
no code implementations • 6 Nov 2023 • Wonho Bae, Yi Ren, Mohamed Osama Ahmed, Frederick Tung, Danica J. Sutherland, Gabriel L. Oliveira
Although neural networks are conventionally optimized toward zero training loss, recent work has shown that targeting a non-zero training-loss threshold, referred to as a flood level, often enables better test-time generalization.
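For background, the flood-level idea referenced here can be written in a few lines; the sketch below assumes a standard PyTorch classification loss, and the default value of flood_level is an illustrative placeholder, not a recommendation from the paper.

```python
import torch.nn.functional as F

def flooded_loss(logits, targets, flood_level=0.05):
    # Standard cross-entropy, then reflect the loss about the flood level b:
    # once the mini-batch loss drops below b, the gradient pushes it back up,
    # so training hovers around a non-zero loss instead of driving it to zero.
    loss = F.cross_entropy(logits, targets)
    return (loss - flood_level).abs() + flood_level
```

Swapping this in for the usual loss in an existing training loop is the only change needed to apply a fixed flood level; choosing the threshold well is the harder question that motivates work in this direction.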
no code implementations • 6 Nov 2023 • Wonho Bae, Jing Wang, Danica J. Sutherland
Most meta-learning methods assume that the (very small) context set used to establish a new task at test time is passively provided.
no code implementations • 11 Feb 2023 • Yi Ren, Shangmin Guo, Wonho Bae, Danica J. Sutherland
We identify a significant trend in the effect of changes in this initial energy on the resulting features after fine-tuning.
1 code implementation • 27 Jan 2023 • Wonho Bae, Mohamed Osama Ahmed, Frederick Tung, Gabriel L. Oliveira
In this work, we propose to train temporal point processes (TPPs) in a meta-learning framework, where each sequence is treated as a different task, via a novel framing of TPPs as neural processes (NPs).
1 code implementation • 16 Aug 2022 • Jinhwan Seo, Wonho Bae, Danica J. Sutherland, Junhyug Noh, Daijin Kim
Weakly Supervised Object Detection (WSOD) is a task that detects objects in an image using a model trained only on image-level annotations.
Ranked #1 on Weakly Supervised Object Detection on MS-COCO-2017
no code implementations • 25 Jun 2022 • Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland
Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given network's representation: they are often far less expensive to compute, and more broadly applicable, than infinite-width NTKs.
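As background for what an eNTK is, here is a brute-force PyTorch sketch of the definition for a scalarized network output; it is a reference computation that is only practical for small models, not the fast approximation proposed in the paper, and the helper names and the logit-summing scalarization are assumptions for readability.

```python
import torch

def empirical_ntk(model, x1, x2):
    """Brute-force empirical NTK: Theta[i, j] = <grad_theta f(x1[i]), grad_theta f(x2[j])>,
    where f is a scalarized network output. Materializes one flattened gradient
    vector per example, so it is only feasible for small models."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x):
        out = model(x.unsqueeze(0)).sum()          # scalarize: for a multi-class head this sums the logits
        grads = torch.autograd.grad(out, params)   # gradient w.r.t. every trainable parameter
        return torch.cat([g.reshape(-1) for g in grads])

    j1 = torch.stack([flat_grad(x) for x in x1])   # (n1, n_params)
    j2 = torch.stack([flat_grad(x) for x in x2])   # (n2, n_params)
    return j1 @ j2.T                               # (n1, n2) kernel matrix
```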
no code implementations • 25 Jun 2022 • Mohamad Amin Mohamadi, Wonho Bae, Danica J. Sutherland
We propose a new method for approximating active learning acquisition strategies that are based on retraining with hypothetically-labeled candidate data points.
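To see why an approximation is needed, the sketch below spells out the naive retraining-based look-ahead strategy that such methods target; retrain_fn, score_fn, and label_space are hypothetical stand-ins supplied by the caller, and this is the expensive baseline being approximated, not the proposed method.

```python
def naive_lookahead_acquisition(candidates, labeled_set, label_space, retrain_fn, score_fn):
    """Brute-force look-ahead acquisition: for each candidate and each hypothetical
    label, retrain on the augmented labeled set, score the retrained model, and
    return the candidate with the best average score."""
    best_x, best_score = None, float("-inf")
    for x in candidates:
        scores = [score_fn(retrain_fn(labeled_set + [(x, y)])) for y in label_space]
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_x, best_score = x, avg
    return best_x
```

The cost scales with candidates × labels × full retraining runs, which is what makes direct use of such strategies infeasible and motivates cheaper approximations.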
no code implementations • 2 May 2022 • Wonho Bae, Junhyug Noh, Milad Jalali Asadabadi, Danica J. Sutherland
Semi-weakly supervised semantic segmentation (SWSSS) aims to train a model to identify objects in images based on a small number of images with pixel-level labels, and many more images with only image-level labels.
no code implementations • 17 Oct 2020 • Yunchao Wei, Shuai Zheng, Ming-Ming Cheng, Hang Zhao, LiWei Wang, Errui Ding, Yi Yang, Antonio Torralba, Ting Liu, Guolei Sun, Wenguan Wang, Luc van Gool, Wonho Bae, Junhyug Noh, Jinhwan Seo, Gunhee Kim, Hao Zhao, Ming Lu, Anbang Yao, Yiwen Guo, Yurong Chen, Li Zhang, Chuangchuang Tan, Tao Ruan, Guanghua Gu, Shikui Wei, Yao Zhao, Mariia Dobko, Ostap Viniavskyi, Oles Dobosevych, Zhendong Wang, Zhenyuan Chen, Chen Gong, Huanqing Yan, Jun He
The purpose of the Learning from Imperfect Data (LID) workshop is to inspire and facilitate research on novel approaches that harness imperfect data and improve data efficiency during training.