Search Results for author: Hae Beom Lee

Found 23 papers, 12 papers with code

Meta Variance Transfer: Learning to Augment from the Others

no code implementations ICML 2020 Seong-Jin Park, Seungju Han, Ji-won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han, Sung Ju Hwang

Humans have the ability to robustly recognize objects under various factors of variation such as nonrigid transformation, background noise, and changes in lighting conditions.

Face Recognition · Meta-Learning · +1

Delta-AI: Local objectives for amortized inference in sparse graphical models

1 code implementation 3 Oct 2023 Jean-Pierre Falet, Hae Beom Lee, Nikolay Malkin, Chen Sun, Dragos Secrieru, Thomas Jiralerspong, Dinghuai Zhang, Guillaume Lajoie, Yoshua Bengio

We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs), which we call $\Delta$-amortized inference ($\Delta$-AI).

Dataset Condensation with Latent Space Knowledge Factorization and Sharing

1 code implementation 21 Aug 2022 Hae Beom Lee, Dong Bok Lee, Sung Ju Hwang

In this paper, we introduce a novel approach for systematically solving the dataset condensation problem in an efficient manner by exploiting the regularity in a given dataset.

Dataset Condensation

Meta Mirror Descent: Optimiser Learning for Fast Convergence

no code implementations 5 Mar 2022 Boyan Gao, Henry Gouk, Hae Beom Lee, Timothy M. Hospedales

The resulting framework, termed Meta Mirror Descent (MetaMD), learns to accelerate optimisation speed.

Meta-Learning

Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty

no code implementations 12 Oct 2021 Jeffrey Willette, Hae Beom Lee, Juho Lee, Sung Ju Hwang

Numerous recent works utilize bi-Lipschitz regularization of neural network layers to preserve relative distances between data instances in the feature spaces of each layer.

Meta-Learning · Out of Distribution (OOD) Detection
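
The snippet above refers to bi-Lipschitz regularization of network layers to preserve relative distances between instances. As a rough illustration (not this paper's implementation), spectral normalization is one common way to bound a layer's Lipschitz constant from above, and an identity skip connection helps keep a lower bound; the block below is a minimal PyTorch sketch under those assumptions.

```python
import torch
import torch.nn as nn

class BiLipschitzBlock(nn.Module):
    """Illustrative residual block: spectral normalization bounds the layer's
    Lipschitz constant from above, while the identity skip connection helps
    preserve a lower bound, approximating the bi-Lipschitz property mentioned
    in the abstract. This is a generic construction, not the paper's model."""
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.utils.spectral_norm(nn.Linear(dim, dim))
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.act(self.fc(x))  # f(x) = x + g(x), with g spectrally normalized

features = nn.Sequential(BiLipschitzBlock(64), BiLipschitzBlock(64))
x = torch.randn(8, 64)
print(features(x).shape)  # torch.Size([8, 64])
```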

Online Hyperparameter Meta-Learning with Hypergradient Distillation

no code implementations ICLR 2022 Hae Beom Lee, Hayeon Lee, Jaewoong Shin, Eunho Yang, Timothy Hospedales, Sung Ju Hwang

Many gradient-based meta-learning methods assume a set of parameters that do not participate in inner-optimization, which can be considered as hyperparameters.

Hyperparameter Optimization · Knowledge Distillation · +1

Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning

no code implementations ICLR 2022 Seanie Lee, Hae Beom Lee, Juho Lee, Sung Ju Hwang

Thanks to the gradients aligned between tasks by our method, the model becomes less vulnerable to negative transfer and catastrophic forgetting.

Continual Learning · Multi-Task Learning · +1

Meta Learning Low Rank Covariance Factors for Energy Based Deterministic Uncertainty

no code implementations ICLR 2022 Jeffrey Ryan Willette, Hae Beom Lee, Juho Lee, Sung Ju Hwang

Numerous recent works utilize bi-Lipschitz regularization of neural network layers to preserve relative distances between data instances in the feature spaces of each layer.

Meta-Learning · Out of Distribution (OOD) Detection

Large-Scale Meta-Learning with Continual Trajectory Shifting

no code implementations 14 Feb 2021 Jaewoong Shin, Hae Beom Lee, Boqing Gong, Sung Ju Hwang

Meta-learning of shared initialization parameters has been shown to be highly effective in solving few-shot learning tasks.

Few-Shot Learning · Multi-Task Learning

Meta-Learned Confidence for Transductive Few-shot Learning

no code implementations 1 Jan 2021 Seong Min Kye, Hae Beom Lee, Hoirin Kim, Sung Ju Hwang

A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples, or with a confidence-weighted average of all the query samples.

Few-Shot Learning
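
As a rough illustration of the transductive refinement described in the snippet above, the sketch below updates class prototypes with a confidence-weighted average of unlabeled query features. Here the confidence is an assumed heuristic (softmax over negative distances to the current prototypes) rather than the paper's meta-learned confidence.

```python
import torch
import torch.nn.functional as F

def refine_prototypes(prototypes, query_feats, temperature=1.0):
    """Illustrative transductive step: each class prototype is refined with a
    confidence-weighted average of unlabeled query features. The distance-based
    confidence and the 50/50 blend are assumptions for this sketch."""
    dists = torch.cdist(query_feats, prototypes) ** 2       # [Q, C] squared distances
    conf = F.softmax(-dists / temperature, dim=1)            # per-query class confidence
    weights = conf / conf.sum(dim=0, keepdim=True)           # normalize per class
    refined = weights.t() @ query_feats                      # [C, D] weighted averages
    return 0.5 * prototypes + 0.5 * refined                  # blend with support prototypes

protos = torch.randn(5, 64)      # 5-way prototypes computed from the support set
queries = torch.randn(75, 64)    # e.g. 15 unlabeled queries per class
print(refine_prototypes(protos, queries).shape)  # torch.Size([5, 64])
```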

MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures

1 code implementation NeurIPS 2020 Jeongun Ryu, Jaewoong Shin, Hae Beom Lee, Sung Ju Hwang

As MetaPerturb is a set-function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures.

Meta-Learning · Transfer Learning

Meta Dropout: Learning to Perturb Latent Features for Generalization

2 code implementations ICLR 2020 Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang

Specifically, we meta-learn a noise generator which outputs a multiplicative noise distribution for latent features, to obtain low errors on the test instances in an input-dependent manner.

BIG-bench Machine Learning · Meta-Learning
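
The Meta Dropout snippet above describes a meta-learned generator that outputs a multiplicative noise distribution over latent features. The sketch below is a minimal, assumed version of that mechanism (input-dependent log-normal noise via the reparameterization trick); the architecture and any meta-training loop are illustrative, not the published implementation.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Illustrative input-dependent multiplicative noise: a small network maps each
    latent feature vector to the parameters of a log-normal distribution, and a
    reparameterized sample of that noise multiplies the features. In meta-training,
    these generator parameters would be updated to lower the test loss across tasks."""
    def __init__(self, dim: int):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.log_sigma = nn.Linear(dim, dim)

    def forward(self, h):
        mu, sigma = self.mu(h), self.log_sigma(h).exp()
        eps = torch.randn_like(h)
        noise = torch.exp(mu + sigma * eps)   # reparameterized log-normal sample
        return h * noise                      # multiplicative perturbation of latent features

h = torch.randn(32, 128)                      # a batch of latent features
perturbed = NoiseGenerator(128)(h)
print(perturbed.shape)                        # torch.Size([32, 128])
```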

Meta-Learning for Short Utterance Speaker Recognition with Imbalance Length Pairs

1 code implementation 6 Apr 2020 Seong Min Kye, Youngmoon Jung, Hae Beom Lee, Sung Ju Hwang, Hoirin Kim

By combining these two learning schemes, our model outperforms existing state-of-the-art speaker verification models learned with a standard supervised learning framework on short utterances (1-2 seconds) on the VoxCeleb datasets.

Meta-Learning · Speaker Identification · +2

Meta-Learned Confidence for Few-shot Learning

1 code implementation 27 Feb 2020 Seong Min Kye, Hae Beom Lee, Hoirin Kim, Sung Ju Hwang

To tackle this issue, we propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries such that they improve the model's transductive inference performance on unseen tasks.

Few-Shot Image Classification · Few-Shot Learning

Learning to Generalize to Unseen Tasks with Bilevel Optimization

no code implementations 5 Aug 2019 Hayeon Lee, Donghyun Na, Hae Beom Lee, Sung Ju Hwang

To tackle this issue, we propose a simple yet effective meta-learning framework for metric-based approaches, which we refer to as learning to generalize (L2G), that explicitly constrains the learning on a sampled classification task to reduce the classification error on a randomly sampled unseen classification task with a bilevel optimization scheme.

Bilevel Optimization · Classification · +2
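
The L2G snippet above hinges on a bilevel optimization scheme: adapt on one sampled task (inner step) while updating the shared parameters so the adapted model also does well on a different, unseen task (outer step). The following is a generic bilevel sketch of that idea with toy data and a single inner step, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

# Minimal bilevel-optimization sketch: the inner step adapts to a sampled ("seen")
# task, and the outer step updates the shared parameters so the adapted model also
# performs well on a different ("unseen") task. Data, losses, and the single inner
# step are illustrative assumptions.
w = torch.randn(64, 5, requires_grad=True)                 # shared linear classifier weights
opt = torch.optim.Adam([w], lr=1e-3)
inner_lr = 0.1

def task_loss(weights, n=20):
    x, y = torch.randn(n, 64), torch.randint(0, 5, (n,))   # toy task data
    return F.cross_entropy(x @ weights, y)

for step in range(100):
    inner_loss = task_loss(w)                               # seen task
    grad = torch.autograd.grad(inner_loss, w, create_graph=True)[0]
    w_adapted = w - inner_lr * grad                         # inner (lower-level) step
    outer_loss = task_loss(w_adapted)                       # unseen task
    opt.zero_grad()
    outer_loss.backward()                                   # differentiates through the adaptation
    opt.step()
```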

Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks

1 code implementation ICLR 2020 Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang

While tasks can come with varying numbers of instances and classes in realistic settings, the existing meta-learning approaches for few-shot classification assume that the number of instances per task and class is fixed.

Bayesian Inference · Meta-Learning · +1

Meta Dropout: Learning to Perturb Features for Generalization

1 code implementation 30 May 2019 Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang

Specifically, we meta-learn a noise generator which outputs a multiplicative noise distribution for latent features, to obtain low errors on the test instances in an input-dependent manner.

BIG-bench Machine Learning · Meta-Learning

Adaptive Network Sparsification via Dependent Variational Beta-Bernoulli Dropout

no code implementations 27 Sep 2018 Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang

With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss.

Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout

1 code implementation 28 May 2018 Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang

With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss.

DropMax: Adaptive Stochastic Softmax

no code implementations ICLR 2018 Hae Beom Lee, Juho Lee, Eunho Yang, Sung Ju Hwang

Moreover, the learning of dropout probabilities for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes.

Classification · General Classification · +1

DropMax: Adaptive Variational Softmax

4 code implementations NeurIPS 2018 Hae Beom Lee, Juho Lee, Saehoon Kim, Eunho Yang, Sung Ju Hwang

Moreover, the learning of dropout rates for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes.

Classification · General Classification · +1
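
The DropMax snippet above describes instance-wise learned dropout on non-target class logits, so that the classifier concentrates on the most confusing classes. The sketch below is a simplified, assumed relaxation of that idea: it uses soft retain probabilities directly, instead of the sampled Bernoulli/concrete variables and variational regularization of the published method.

```python
import torch
import torch.nn.functional as F

def dropmax_loss(logits, retain_logits, targets):
    """Illustrative DropMax-style objective: each non-target class stays in the
    softmax with an instance-wise learned retain probability (the target class is
    always kept), focusing training on the most confusing classes. This soft
    relaxation is an assumption made for the sketch."""
    retain_prob = torch.sigmoid(retain_logits)                       # per-instance, per-class keep prob.
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    mask = retain_prob * (1.0 - one_hot) + one_hot                   # target class always retained
    masked_exp = mask * logits.exp() + 1e-12                         # softly drop non-target classes
    log_prob = torch.log(masked_exp / masked_exp.sum(dim=1, keepdim=True))
    return F.nll_loss(log_prob, targets)

logits = torch.randn(16, 10)          # class scores from the base classifier
retain_logits = torch.randn(16, 10)   # from an auxiliary head predicting retain rates (assumed)
targets = torch.randint(0, 10, (16,))
print(dropmax_loss(logits, retain_logits, targets).item())
```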

Deep Asymmetric Multi-task Feature Learning

1 code implementation ICML 2018 Hae Beom Lee, Eunho Yang, Sung Ju Hwang

We propose Deep Asymmetric Multitask Feature Learning (Deep-AMTFL) which can learn deep representations shared across multiple tasks while effectively preventing negative transfer that may happen in the feature sharing process.

Image Classification · Transfer Learning
