no code implementations • 7 Oct 2022 • Kuilin Chen, Chi-Guhn Lee
Despite the recent progress in few-shot learning, most methods rely on supervised pretraining or meta-learning on labeled meta-training data and cannot be applied to the case where the pretraining data is unlabeled.
Self-Supervised Learning • Unsupervised Few-Shot Image Classification +1
no code implementations • 3 Aug 2022 • Raj Patel, Chia-Wei Hsing, Serkan Sahin, Saeed S. Jahromi, Samuel Palmer, Shivam Sharma, Christophe Michel, Vincent Porte, Mustafa Abid, Stephane Aubert, Pierre Castellani, Chi-Guhn Lee, Samuel Mugel, Roman Orus
We demonstrate that tensor neural networks (TNNs) provide significant parameter savings while attaining the same accuracy as classical dense neural networks (DNNs).
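The snippet does not spell out the architecture, but in tensorized networks of this kind the saving typically comes from replacing a dense weight matrix with a low-rank tensor factorization. A minimal sketch of that parameter-count argument, assuming a tensor-train (matrix-product-operator) style factorization; the shapes and rank are illustrative, not taken from the paper:

```python
# Illustrative parameter count: dense layer vs. tensor-train (TT) factorization.
# Mode shapes and TT-rank below are hypothetical, not from the paper.

def dense_params(n_in: int, n_out: int) -> int:
    """Parameters of a standard dense layer weight matrix (bias omitted)."""
    return n_in * n_out

def tt_params(in_modes, out_modes, rank: int) -> int:
    """Parameters of a TT/MPO factorization of the same weight matrix.

    The (n_in x n_out) matrix is reshaped into a tensor with paired modes
    (in_modes[k], out_modes[k]) and factored into cores of shape
    (r_prev, in_mode, out_mode, r_next), with rank 1 on the two ends.
    """
    total, r_prev = 0, 1
    for k, (m, n) in enumerate(zip(in_modes, out_modes)):
        r_next = 1 if k == len(in_modes) - 1 else rank
        total += r_prev * m * n * r_next
        r_prev = r_next
    return total

# Example: a 1024 x 1024 layer, factored as (8, 8, 16) modes with TT-rank 4.
print(dense_params(1024, 1024))              # 1,048,576 parameters
print(tt_params((8, 8, 16), (8, 8, 16), 4))  # 2,304 parameters
```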
no code implementations • 6 Jul 2022 • Amine Mohamed Aboussalah, Min-Jae Kwon, Raj G Patel, Cheng Chi, Chi-Guhn Lee
We apply RIM to diverse real-world time-series cases, achieving strong performance over non-augmented data on regression, classification, and reinforcement learning tasks.
no code implementations • 26 Apr 2022 • Kuilin Chen, Chi-Guhn Lee
To tackle the aforementioned issues, we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification.
no code implementations • NeurIPS 2021 • Michael Gimelfarb, André Barreto, Scott Sanner, Chi-Guhn Lee
Sample efficiency and risk-awareness are central to the development of practical reinforcement learning (RL) for complex decision-making.
no code implementations • 10 Feb 2021 • Kuilin Chen, Chi-Guhn Lee
We propose a computationally efficient attention-based network combined with Gaussian process regression to generate real-valued sequences; we call this model the Attentive-GP.
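A minimal sketch of the general recipe described above: an attention layer produces features, and a Gaussian process regressor is fit on them to obtain predictive means and uncertainties. The toy data, random (untrained) projections, and RBF kernel are assumptions for illustration; this is not the paper's Attentive-GP architecture:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def attention_features(X, d_k=8):
    """Single-head scaled dot-product self-attention with random projections.

    X: (n_samples, seq_len, d_in). Returns mean-pooled features (n_samples, d_k).
    Random, untrained projections stand in for a learned attention layer.
    """
    d_in = X.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d_in, d_k)) / np.sqrt(d_in) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return (weights @ V).mean(axis=1)  # pool over the sequence dimension

# Toy sequence-to-scalar regression task.
X = rng.standard_normal((200, 10, 4))        # 200 sequences, length 10, 4 channels
y = X[:, :, 0].sum(axis=1) + 0.1 * rng.standard_normal(200)

features = attention_features(X)
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(features[:150], y[:150])
mean, std = gp.predict(features[150:], return_std=True)  # predictions + uncertainty
```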
no code implementations • 1 Jan 2021 • Amine Mohamed Aboussalah, Chi-Guhn Lee
We examine the hypothesis that the concept of symmetry augmentation is fundamentally linked to learning.
no code implementations • ICLR 2021 • Kuilin Chen, Chi-Guhn Lee
For classification problems, we employ a nearest-neighbor scheme to classify sparsely available data, and incorporate intra-class variation, less-forgetting regularization, and calibration of reference vectors to mitigate catastrophic forgetting.
Ranked #9 on Few-Shot Class-Incremental Learning on mini-Imagenet
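In its simplest form, a nearest-neighbor scheme over reference vectors amounts to a nearest-class-mean classifier: each class is summarized by the mean of its few support embeddings, and each query is assigned to the closest reference. A minimal sketch under that assumption; the less-forgetting regularization and calibration terms from the paper are omitted:

```python
import numpy as np

def class_reference_vectors(support_emb, support_labels):
    """Reference vector per class: the mean of its support embeddings."""
    classes = np.unique(support_labels)
    refs = np.stack([support_emb[support_labels == c].mean(axis=0) for c in classes])
    return classes, refs

def nearest_reference_predict(query_emb, classes, refs):
    """Assign each query to the class whose reference vector is closest."""
    dists = np.linalg.norm(query_emb[:, None, :] - refs[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 5-way 5-shot episode with 16-dimensional embeddings.
rng = np.random.default_rng(0)
support = rng.standard_normal((25, 16))
labels = np.repeat(np.arange(5), 5)
classes, refs = class_reference_vectors(support, labels)
preds = nearest_reference_predict(rng.standard_normal((10, 16)), classes, refs)
```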
1 code implementation • 2 Jul 2020 • Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
Resolving the exploration-exploitation trade-off remains a fundamental problem in the design and implementation of reinforcement learning (RL) algorithms.
no code implementations • 10 Jun 2020 • Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
We demonstrate the effectiveness of this approach for static optimization of smooth functions and for transfer learning in a high-dimensional supply-chain problem with cost uncertainty.
no code implementations • 29 Feb 2020 • Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
In this paper, we assume knowledge of estimated source-task dynamics and policies, and that the source and target tasks share common sub-goals while differing in dynamics.
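One generic way to exploit these assumptions is to maintain a Bayesian posterior over which source task's dynamics best explains the transitions observed in the target task, and to mix the corresponding source policies by those posterior weights. The sketch below illustrates that idea only; it is not necessarily the paper's mixture-of-experts formulation, and all names are hypothetical:

```python
import numpy as np

def update_source_posterior(prior, transition_likelihoods):
    """Bayes update over which source task best explains an observed transition.

    prior: (n_sources,) probabilities; transition_likelihoods: (n_sources,)
    likelihood of the observed (s, a, s') under each source's dynamics model.
    """
    posterior = prior * transition_likelihoods
    return posterior / posterior.sum()

def mixture_action_probs(posterior, source_policy_probs):
    """Mix the source policies' action distributions by the posterior weights."""
    # source_policy_probs: (n_sources, n_actions) at the current state.
    return posterior @ source_policy_probs

# Toy example: two source tasks, three actions.
posterior = update_source_posterior(np.array([0.5, 0.5]), np.array([0.9, 0.2]))
print(mixture_action_probs(posterior, np.array([[0.7, 0.2, 0.1],
                                                [0.1, 0.1, 0.8]])))
```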
no code implementations • NeurIPS 2018 • Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
Potential-based reward shaping is a powerful technique for accelerating the convergence of reinforcement learning algorithms.
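Concretely, potential-based shaping augments the environment reward with F(s, s') = γΦ(s') − Φ(s) for a potential function Φ over states, a form known to preserve the optimal policy (Ng, Harada & Russell, 1999). A minimal sketch, with a hypothetical distance-to-goal potential:

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, done=False):
    """Potential-based reward shaping: r + gamma * phi(s') - phi(s).

    Preserves the optimal policy for any potential function phi
    (Ng, Harada & Russell, 1999). Terminal states get potential 0.
    """
    phi_next = 0.0 if done else phi(s_next)
    return r + gamma * phi_next - phi(s)

# Hypothetical potential: negative distance to a goal state on a line.
goal = 10
phi = lambda s: -abs(s - goal)
print(shaped_reward(r=0.0, s=3, s_next=4, phi=phi))  # small bonus for progress
```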