1 code implementation • 27 Dec 2023 • Seunghan Lee, Taeyoung Park, Kibok Lee
SoftCLT is a plug-and-play method for time series contrastive learning that improves the quality of learned representations without bells and whistles.
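For intuition, below is a minimal PyTorch sketch of a soft instance-wise contrastive loss in the spirit of this idea: pairwise distances between raw series define soft targets rather than hard positive/negative assignments. The `tau_inst` sharpness parameter and the precomputed `dist` matrix are illustrative assumptions, not the official SoftCLT code.

```python
# Illustrative sketch of soft contrastive learning for time series (not the
# official SoftCLT implementation). Assumes `dist` is a precomputed (N, N)
# pairwise distance matrix between raw series (e.g., DTW or L2 distances).
import torch
import torch.nn.functional as F

def soft_contrastive_loss(z, dist, tau_inst=0.5, temperature=0.1):
    """z: (N, D) embeddings; dist: (N, N) distances between raw series."""
    z = F.normalize(z, dim=1)
    logits = z @ z.t() / temperature                   # (N, N) similarity logits
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(mask, float('-inf'))   # exclude self-similarity
    # Soft assignments: series that are closer in input space get larger targets.
    soft = torch.sigmoid(-tau_inst * dist).masked_fill(mask, 0.0)
    targets = soft / soft.sum(dim=1, keepdim=True)     # row-normalize to a distribution
    logp = F.log_softmax(logits, dim=1).masked_fill(mask, 0.0)  # avoid 0 * -inf
    return -(targets * logp).sum(dim=1).mean()
```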
1 code implementation • 27 Dec 2023 • Seunghan Lee, Taeyoung Park, Kibok Lee
However, we argue that capturing such patch dependencies might not be an optimal strategy for time series representation learning; rather, learning to embed patches independently results in better time series representations.
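A minimal sketch of what embedding patches independently could look like: each patch passes through a shared MLP, with no attention or mixing across patches. The module structure and hyperparameters are assumptions for illustration.

```python
# Illustrative patch-independent encoder: a shared MLP embeds each patch with
# no cross-patch interaction (module names and sizes are assumptions).
import torch
import torch.nn as nn

class PatchIndependentEncoder(nn.Module):
    def __init__(self, patch_len=16, d_model=128):
        super().__init__()
        self.patch_len = patch_len
        self.mlp = nn.Sequential(
            nn.Linear(patch_len, d_model), nn.ReLU(), nn.Linear(d_model, d_model))

    def forward(self, x):                                      # x: (batch, seq_len)
        patches = x.unfold(1, self.patch_len, self.patch_len)  # (batch, n_patches, patch_len)
        return self.mlp(patches)                               # each patch embedded independently
```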
no code implementations • 13 Sep 2022 • Achin Jain, Kibok Lee, Gurumurthy Swaminathan, Hao Yang, Bernt Schiele, Avinash Ravichandran, Onkar Dabeer
Combined with a matching loss, it can effectively find objects that are similar to the input patch and complete the missing annotations.
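As a rough illustration of patch-based matching, the hypothetical snippet below scores candidate regions by cosine similarity to the query patch embedding and trains with a binary matching loss; the function names and loss form are assumptions, not the paper's implementation.

```python
# Hypothetical patch-to-region matching loss: candidate boxes whose embeddings
# are close to the query patch embedding are treated as matches.
import torch
import torch.nn.functional as F

def matching_loss(query_emb, region_embs, labels, temperature=0.1):
    """query_emb: (D,); region_embs: (R, D); labels: (R,) 1 if same object, else 0."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), region_embs) / temperature
    return F.binary_cross_entropy_with_logits(sims, labels.float())
```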
1 code implementation • 22 Jul 2022 • Kibok Lee, Hao Yang, Satyaki Chakraborty, Zhaowei Cai, Gurumurthy Swaminathan, Avinash Ravichandran, Onkar Dabeer
Most existing works on few-shot object detection (FSOD) focus on a setting where both pre-training and few-shot learning datasets are from a similar domain.
2 code implementations • NeurIPS 2021 • Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin
Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.
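For reference, here is a generic sketch of such an augmentation-invariant objective (an InfoNCE loss over two augmented views of each image); it illustrates the setting described above, not this paper's specific method.

```python
# Generic InfoNCE loss over two augmented views of the same batch of images.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """z1, z2: (N, D) embeddings of two augmentations of the same N images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)              # (2N, D)
    logits = z @ z.t() / temperature
    logits.fill_diagonal_(float('-inf'))                     # exclude self-pairs
    n = len(z1)
    targets = torch.arange(2 * n, device=z.device).roll(n)   # positive = the other view
    return F.cross_entropy(logits, targets)
```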
3 code implementations • ICLR 2021 • Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
Contrastive representation learning has been shown to be effective for learning representations from unlabeled data.
no code implementations • 24 May 2020 • Kibok Lee, Zhuoyuan Chen, Xinchen Yan, Raquel Urtasun, Ersin Yumer
Our shape-aware adversarial attacks are orthogonal to existing point cloud based attacks and shed light on the vulnerability of 3D deep neural networks.
2 code implementations • ICLR 2020 • Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee
Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (yet semantically similar to the environments they were trained on), particularly when they are trained on high-dimensional state spaces, such as images.
no code implementations • ICLR 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
For instance, on the CIFAR-10 dataset with 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.
1 code implementation • ICCV 2019 • Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee
Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.
1 code implementation • 31 Jan 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) generalize poorly from such noisy training datasets.
4 code implementations • NeurIPS 2018 • Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
Ranked #2 on Out-of-Distribution Detection on MS-1M vs. IJB-C
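One common distance-based realization of this idea scores a test sample by its Mahalanobis distance to the nearest class-conditional Gaussian fitted on training features. The sketch below is illustrative; it assumes `means` and `prec` have been estimated from training-set features.

```python
# Sketch of a distance-based OOD score: larger = farther from the training
# distribution. `means` (C, D) and `prec` (D, D) are assumed to be the class
# means and shared precision matrix estimated from training features.
import numpy as np

def mahalanobis_ood_score(x, means, prec):
    """x: (N, D) test features. Returns one score per sample."""
    d = x[:, None, :] - means[None, :, :]               # (N, C, D)
    dists = np.einsum('ncd,de,nce->nc', d, prec, d)     # squared Mahalanobis distance
    return dists.min(axis=1)                            # distance to the nearest class
```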
no code implementations • CVPR 2018 • Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee
The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.
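A hypothetical sketch of the leave-one-out relabeling idea: a known child class is temporarily treated as a novel subclass of its parent, so the model sees examples of "novel under this superclass" during training. All names below are illustrative assumptions.

```python
# Hypothetical leave-one-out relabeling for hierarchical novelty detection.
def leave_one_out_relabel(labels, taxonomy, held_out):
    """labels: iterable of class ids; taxonomy: dict child -> parent;
    held_out: set of class ids temporarily treated as novel under their parent."""
    relabeled = []
    for y in labels:
        if y in held_out:
            relabeled.append(('novel', taxonomy[y]))  # novel child of parent(y)
        else:
            relabeled.append((y, taxonomy[y]))
    return relabeled
```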
3 code implementations • ICLR 2018 • Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin
The problem of detecting whether a test sample is from the in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications.
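A minimal sketch of one confidence-calibration recipe consistent with this setup: standard cross-entropy on in-distribution data plus a KL term pushing predictions on out-of-distribution samples toward the uniform distribution. The weighting `beta` and the source of the OOD samples are assumptions.

```python
# Illustrative confidence loss: cross-entropy on in-distribution batches plus a
# KL-to-uniform penalty on out-of-distribution batches.
import torch
import torch.nn.functional as F

def confidence_loss(logits_in, targets_in, logits_out, beta=1.0):
    ce = F.cross_entropy(logits_in, targets_in)
    log_p_out = F.log_softmax(logits_out, dim=1)
    # KL(uniform || p) equals -mean log-probability under p, up to a constant.
    kl_to_uniform = -log_p_out.mean()
    return ce + beta * kl_to_uniform
```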
no code implementations • 24 May 2017 • Anna C. Gilbert, Yi Zhang, Kibok Lee, Yuting Zhang, Honglak Lee
Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible.
no code implementations • 21 Jun 2016 • Yuting Zhang, Kibok Lee, Honglak Lee
Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction.
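A minimal sketch of this kind of augmentation, assuming a small convolutional classifier on 32x32 inputs: a decoding pathway reconstructs the input from encoder features, and training combines the supervised and reconstruction losses. The architecture and loss weighting are illustrative, not the paper's network.

```python
# Illustrative classifier with an added decoding pathway for reconstruction.
import torch
import torch.nn as nn

class AugmentedNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                     nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)   # assumes 32x32 inputs
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1))

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h.flatten(1)), self.decoder(h)

def joint_loss(logits, recon, x, y, lam=0.1):
    # Supervised classification loss plus weighted reconstruction loss.
    return (nn.functional.cross_entropy(logits, y)
            + lam * nn.functional.mse_loss(recon, x))
```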