Search Results for author: Donglin Zhan

Found 6 papers, 2 papers with code

MetaMix: Towards Corruption-Robust Continual Learning With Temporally Self-Adaptive Data Transformation

no code implementations · CVPR 2023 · Zhenyi Wang, Li Shen, Donglin Zhan, Qiuling Suo, Yanjun Zhu, Tiehang Duan, Mingchen Gao

To make continual learners trustworthy and robust to corruptions when deployed in safety-critical scenarios, we propose a meta-learning framework of self-adaptive data augmentation to tackle corruption robustness in CL.

Continual Learning · Data Augmentation · +1

Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions

1 code implementation · 3 Sep 2022 · Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Donglin Zhan, Tiehang Duan, Mingchen Gao

Two key challenges arise in this more realistic setting: (i) how to use unlabeled data in the presence of a large amount of unlabeled out-of-distribution (OOD) data; and (ii) how to prevent catastrophic forgetting on previously learned task distributions due to the task distribution shift.
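A common heuristic for the first challenge, filtering unlabeled OOD examples before using them, is confidence thresholding with the current model. This is an illustrative sketch of that general idea, not necessarily the mechanism used in the paper; the threshold value is an assumption.

```python
import numpy as np

def filter_ood(probs, threshold=0.9):
    """Keep only unlabeled examples the model predicts confidently.

    probs: (N, C) array of predicted class probabilities.
    Returns a boolean mask selecting likely in-distribution examples.
    Confidence thresholding is a standard OOD-filtering heuristic,
    used here as a stand-in, not as the paper's method.
    """
    confidence = probs.max(axis=1)
    return confidence >= threshold

probs = np.array([[0.95, 0.05],   # confident  -> keep
                  [0.55, 0.45],   # uncertain  -> treat as OOD, drop
                  [0.10, 0.90]])  # confident  -> keep
mask = filter_ood(probs)
```

Examples passing the mask would then be pseudo-labeled or otherwise incorporated into training, while the rest are discarded.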


Learning To Learn and Remember Super Long Multi-Domain Task Sequence

1 code implementation · CVPR 2022 · Zhenyi Wang, Li Shen, Tiehang Duan, Donglin Zhan, Le Fang, Mingchen Gao

We propose a domain shift detection technique to capture latent domain change and equip the meta optimizer with it to work in this setting.
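One minimal way to detect a latent domain change from the loss stream is to flag losses that jump well above their running statistics. This sketch is illustrative only; the paper's actual detector and the hyperparameter `k` here are assumptions.

```python
import math

class ShiftDetector:
    """Flag a latent domain change when the current loss exceeds the
    running mean by k running standard deviations.

    Illustrative sketch only: the paper's detection technique and the
    choice of k are not taken from the source.
    """
    def __init__(self, k=3.0, eps=1e-8):
        self.k, self.eps = k, eps
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, loss):
        # Check against current statistics before folding in the new loss.
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            shifted = loss > self.mean + self.k * (std + self.eps)
        else:
            shifted = False
        # Welford's online update of mean and variance.
        self.n += 1
        delta = loss - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (loss - self.mean)
        return shifted

losses = [1.0, 1.1, 0.9, 1.0, 1.05, 10.0]  # spike simulates a domain change
detector = ShiftDetector()
flags = [detector.update(x) for x in losses]  # only the spike is flagged
```

On a detected shift, the meta optimizer could reset or re-adapt its state for the new domain.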


Towards Learning to Remember in Meta Learning of Sequential Domains

no code implementations · 1 Jan 2021 · Zhenyi Wang, Tiehang Duan, Donglin Zhan, Changyou Chen

However, a natural generalization to the sequential domain setting that avoids catastrophic forgetting has not been well investigated.

Continual Learning · Meta-Learning

Adaptive Transfer Learning of Multi-View Time Series Classification

no code implementations · 14 Oct 2019 · Donglin Zhan, Shiyu Yi, Dongli Xu, Xiao Yu, Denglin Jiang, Siqi Yu, Haoting Zhang, Wenfang Shangguan, Weihua Zhang

In this paper, we first propose a general adaptive transfer learning framework for multi-view time series data, which is effective at storing inter-view importance values during knowledge transfer.

Classification · Density Estimation · +5

FIS-GAN: GAN with Flow-based Importance Sampling

no code implementations · 6 Oct 2019 · Shiyu Yi, Donglin Zhan, Wenqing Zhang, Denglin Jiang, Kang An, Hao Wang

The Generative Adversarial Network (GAN) training process, in most cases, applies uniform or Gaussian sampling in the latent space, which likely spends most of the computation on examples that are already handled properly and easy to generate.

Density Estimation · Stochastic Optimization
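The core idea, spending more of the latent-space sampling budget on hard-to-generate examples, can be sketched with plain multinomial resampling by per-sample loss. The paper learns importance weights with a normalizing flow; this simplified stand-in uses hypothetical per-sample losses and skips the flow entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_resample(latents, losses, n_samples):
    """Resample latent codes with probability proportional to their
    per-sample loss, biasing computation toward hard examples.

    Simplified stand-in: FIS-GAN models these importance weights with
    a normalizing flow rather than direct multinomial resampling.
    """
    weights = losses / losses.sum()
    idx = rng.choice(len(latents), size=n_samples, p=weights)
    return latents[idx]

z = rng.standard_normal((100, 8))          # Gaussian latent codes
loss = np.linspace(0.1, 2.0, 100)          # hypothetical per-sample losses
z_hard = importance_resample(z, loss, 64)  # biased toward high-loss codes
```

Generator updates would then be computed on `z_hard` instead of fresh uniform or Gaussian draws.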
