Search Results for author: Dong-Yeon Cho

Found 5 papers, 0 papers with code

Sample-based Regularization: A Transfer Learning Strategy Toward Better Generalization

no code implementations • 10 Jul 2020 • Yunho Jeon, Yongseok Choi, Jaesun Park, Subin Yi, Dong-Yeon Cho, Jiwon Kim

However, this is likely to restrict the potential of the target model and some transferred knowledge from the source can interfere with the training procedure.

Transfer Learning
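The excerpt above refers to conventional transfer regularization that ties target parameters to the pretrained source weights, which it argues can restrict the target model. A minimal, hypothetical sketch of that conventional penalty (an L2-SP-style term; the names `w_src`, `w_tgt`, and `beta` are illustrative, and this is not the sample-based method the paper proposes):

```python
import numpy as np

def l2_sp_penalty(w_target, w_source, beta=0.01):
    """Penalize the target weights for drifting away from the source weights.

    This is the conventional parameter-space constraint the abstract
    argues can interfere with target training, not the paper's method.
    """
    return beta * np.sum((w_target - w_source) ** 2)

w_src = np.array([1.0, -2.0, 0.5])   # made-up pretrained source weights
w_tgt = np.array([1.5, -1.0, 0.0])   # made-up current target weights

print(l2_sp_penalty(w_tgt, w_src))   # 0.015
```

Adding this term to the target task loss pulls every weight back toward its source value, which is exactly the kind of restriction the abstract motivates moving away from.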

Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators

no code implementations • ICLR 2020 • Yongseok Choi, Junyoung Park, Subin Yi, Dong-Yeon Cho

Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most existing methods assume that all meta-training and meta-testing examples come from a single domain.

Classification · Few-Shot Learning +2

Discriminative Few-Shot Learning Based on Directional Statistics

no code implementations • 5 Jun 2019 • Junyoung Park, Subin Yi, Yongseok Choi, Dong-Yeon Cho, Jiwon Kim

Metric-based few-shot learning methods try to overcome the difficulty caused by the lack of training examples by learning an embedding that makes comparisons easy.

Few-Shot Learning · General Classification
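The metric-based idea in the excerpt can be sketched in a few lines: embed the labeled support examples, average them into per-class prototypes, and classify a query by its nearest prototype. The toy vectors below stand in for learned embeddings; the paper itself builds on directional statistics, which this prototypical-style sketch does not implement:

```python
import numpy as np

def prototypes(support, labels):
    """Average the support embeddings of each class into one prototype."""
    classes = sorted(set(labels))
    return {c: np.mean([s for s, l in zip(support, labels) if l == c], axis=0)
            for c in classes}

def classify(query, protos):
    """Assign the query to the class with the nearest prototype."""
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))

# Toy 2-way, 2-shot episode with made-up embedding vectors.
support = [np.array([0.9, 0.1]), np.array([1.1, -0.1]),
           np.array([0.0, 1.0]), np.array([0.2, 0.8])]
labels = [0, 0, 1, 1]

protos = prototypes(support, labels)
print(classify(np.array([1.0, 0.0]), protos))  # 0
```

Because classification reduces to a distance comparison in the embedding space, no per-class parameters need to be learned at test time, which is what makes the approach suit the few-shot setting.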

Auto-Meta: Automated Gradient Based Meta Learner Search

no code implementations • 11 Jun 2018 • Jaehong Kim, Sangyeul Lee, Sungwan Kim, Moonsu Cha, Jung Kwon Lee, Youngduck Choi, Yongseok Choi, Dong-Yeon Cho, Jiwon Kim

Fully automating machine learning pipelines is one of the key challenges of current artificial intelligence research, since practical machine learning often requires costly and time-consuming human-powered processes such as model design, algorithm development, and hyperparameter tuning.

BIG-bench Machine Learning · Meta-Learning +1

Meta Continual Learning

no code implementations • 11 Jun 2018 • Risto Vuorio, Dong-Yeon Cho, Daejoong Kim, Jiwon Kim

This ability is limited in the current deep neural networks by a problem called catastrophic forgetting, where training on new tasks tends to severely degrade performance on previous tasks.

Continual Learning
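The catastrophic forgetting described in the excerpt can be shown with a deliberately tiny, hypothetical example: a single logistic-regression weight is trained on task A, then on task B whose labels are reversed, and plain SGD on task B overwrites what was learned for task A. This illustrates the problem the paper targets, not its meta-learning solution:

```python
import numpy as np

def sgd(w, xs, ys, lr=0.5, epochs=200):
    """Plain online SGD on a one-weight logistic regression."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + np.exp(-w * x))
            w += lr * (y - p) * x
    return w

def accuracy(w, xs, ys):
    return np.mean([(1 / (1 + np.exp(-w * x)) > 0.5) == y
                    for x, y in zip(xs, ys)])

xs = np.array([-2.0, -1.0, 1.0, 2.0])
ys_a = np.array([0, 0, 1, 1])   # task A: positive x -> class 1
ys_b = 1 - ys_a                 # task B: labels reversed

w = sgd(0.0, xs, ys_a)
print(accuracy(w, xs, ys_a))    # 1.0: task A is learned
w = sgd(w, xs, ys_b)
print(accuracy(w, xs, ys_a))    # 0.0: task A is forgotten after task B
```

With no mechanism to protect the weights that matter for task A, training on task B simply drives them to the new optimum, which is the failure mode continual-learning methods are designed to mitigate.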
